<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[Undertone]]></title><description><![CDATA[Conversations with the most interesting people on the planet.]]></description><link>https://www.danschulz.co</link><image><url>https://substackcdn.com/image/fetch/$s_!rkV8!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e2e6cbc-2e16-4352-9a6c-a1d9bf921742_1280x1280.png</url><title>Undertone</title><link>https://www.danschulz.co</link></image><generator>Substack</generator><lastBuildDate>Wed, 06 May 2026 10:34:56 GMT</lastBuildDate><atom:link href="https://www.danschulz.co/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Dan Schulz]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[dan@danschulz.co]]></webMaster><itunes:owner><itunes:email><![CDATA[dan@danschulz.co]]></itunes:email><itunes:name><![CDATA[Dan]]></itunes:name></itunes:owner><itunes:author><![CDATA[Dan]]></itunes:author><googleplay:owner><![CDATA[dan@danschulz.co]]></googleplay:owner><googleplay:email><![CDATA[dan@danschulz.co]]></googleplay:email><googleplay:author><![CDATA[Dan]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[Henry Oliver]]></title><description><![CDATA[Late bloomers and literature]]></description><link>https://www.danschulz.co/p/henry-oliver</link><guid isPermaLink="false">https://www.danschulz.co/p/henry-oliver</guid><dc:creator><![CDATA[Dan]]></dc:creator><pubDate>Tue, 06 Aug 2024 10:53:25 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/147386095/2b38c1c9a76a26d1373d379a3e4a574b.mp3" length="0" 
type="audio/mpeg"/><content:encoded><![CDATA[<p>Henry Oliver is the author of the book "Second Act: What Late Bloomers Can Tell You About Success and Reinventing Your Life" and prolific blogger on literature.</p><div id="youtube2-Tb8Hq8IElP0" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;Tb8Hq8IElP0&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/Tb8Hq8IElP0?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><iframe class="spotify-wrap podcast" data-attrs="{&quot;image&quot;:&quot;https://i.scdn.co/image/ab6765630000ba8ae1a47de6a8504efb8c2719e2&quot;,&quot;title&quot;:&quot;Henry Oliver&quot;,&quot;subtitle&quot;:&quot;Dan Schulz&quot;,&quot;description&quot;:&quot;Episode&quot;,&quot;url&quot;:&quot;https://open.spotify.com/episode/5rhMXO75FMUIEfCIKZPDhS&quot;,&quot;belowTheFold&quot;:false,&quot;noScroll&quot;:false}" src="https://open.spotify.com/embed/episode/5rhMXO75FMUIEfCIKZPDhS" frameborder="0" gesture="media" allowfullscreen="true" allow="encrypted-media" data-component-name="Spotify2ToDOM"></iframe><div class="apple-podcast-container" data-component-name="ApplePodcastToDom"><iframe class="apple-podcast " data-attrs="{&quot;url&quot;:&quot;https://embed.podcasts.apple.com/us/podcast/undertone/id1693303954?i=1000664459889&quot;,&quot;isEpisode&quot;:true,&quot;imageUrl&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/podcast-episode_1000664459889.jpg&quot;,&quot;title&quot;:&quot;Henry 
Oliver&quot;,&quot;podcastTitle&quot;:&quot;Undertone&quot;,&quot;podcastByline&quot;:&quot;&quot;,&quot;duration&quot;:4178000,&quot;numEpisodes&quot;:&quot;&quot;,&quot;targetUrl&quot;:&quot;https://podcasts.apple.com/us/podcast/henry-oliver/id1693303954?i=1000664459889&amp;uo=4&quot;,&quot;releaseDate&quot;:&quot;2024-08-06T08:00:00Z&quot;}" src="https://embed.podcasts.apple.com/us/podcast/undertone/id1693303954?i=1000664459889" frameborder="0" allow="autoplay *; encrypted-media *;" allowfullscreen="true"></iframe></div><h3>Timestamps</h3><p>(0:00:00) Intro</p><p>(0:00:38) Late Bloomers</p><p>(0:07:28) Great Man theory of history</p><p>(0:16:14) The Common Reader</p><p>(0:20:56) Rear Window</p><p>(0:22:58) Literature vs film or music</p><p>(0:26:41) Mimesis</p><p>(0:30:07) Artistic themes</p><p>(0:38:11) Knausgaard, Bolano, Ferrante</p><p>(0:39:27) Misreading</p><p>(0:42:54) Harold Bloom</p><p>(0:45:47) Art and economic growth</p><p>(0:47:25) LLMs</p><p>(0:50:55) Hayek</p><p>(0:52:20) Keats</p><p>(0:54:35) Literature vs philosophy</p><p>(0:57:25) Morality</p><p>(0:59:11) Nepotism</p><p>(1:00:00) Shakespeare</p><p>(1:06:57) Henry&#8217;s output</p><h3>Links</h3><ul><li><p><a href="https://www.commonreader.co.uk/">Henry&#8217;s Substack</a></p></li><li><p><a href="https://www.amazon.com/Second-Act-Bloomers-Success-Reinventing/dp/1399813315">Henry&#8217;s book, &#8220;Second Act&#8221;</a></p></li><li><p><a href="https://twitter.com/dnschlz">&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;Follow Dan on X&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;</a></p></li><li><p>Subscribe to Undertone on&nbsp;<a href="https://www.youtube.com/channel/UCtRjQ8EA3Via7KXzv9F173Q">&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;YouTube&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;</a>,&nbsp;<a 
href="https://podcasts.apple.com/us/podcast/the-world-of-yesterday/id1693303954">&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;Apple&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;</a>,&nbsp;<a href="https://open.spotify.com/show/59YkrYwjAgiKAVMNGWPaLE">&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;Spotify&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;</a>, or&nbsp;<a href="https://www.danschulz.co/">&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;Substack&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;</a></p></li><li><p>Watch or listen to previous episodes of Undertone with&nbsp;<a href="https://www.danschulz.co/p/tyler-cowen">&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;Tyler Cowen&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;</a>,&nbsp;<a href="https://www.danschulz.co/p/vitalik-buterin">&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;Vitalik Buterin&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;</a>,&nbsp;<a href="https://www.danschulz.co/p/scott-sumner">&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;Scott Sumner&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;</a>,&nbsp;<a href="https://www.danschulz.co/p/samo-burja">&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;Samo Burja&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;</a>,&nbsp;<a href="https://www.danschulz.co/p/3-steve-hsu">&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;Steve Hsu&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;</a>, and&nbsp;<a href="https://www.danschulz.co/">&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;more&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;</a></p></li><li><p>I love hearing from listeners. Email me any time at <a href="mailto:dan@danschulz.co">dan@danschulz.co</a></p></li><li><p>Share anonymous feedback on the podcast:&nbsp;<a href="https://forms.gle/w7LXMCXhdJ5Q2eB9A">&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;https://forms.gle/w7LXMCXhdJ5Q2eB9A&#8288;</a></p></li></ul><h3>Transcript</h3><p><strong>[00:00:00] Dan: </strong>This is a conversation with Henry Oliver. 
Henry is one of the most prolific writers on the internet, and his blog, <em>The Common Reader</em>, features some of the best writing on literature that I've come across. He also recently published a book titled <em>Second Act: What Late Bloomers Can Tell You About Success and Reinventing Your Life</em>, which I highly recommend. In this conversation we talk about the real meaning of his book, the Great Man Theory of history, Shakespeare, and lots, lots more. I hope you enjoy it.</p><p>All right. Henry, welcome to the show.</p><p><strong>[00:00:36] Henry Oliver: </strong>Good to be here. Thank you.</p><p><strong>[00:00:39] Dan: </strong>I would like your take on a particular reading of your book. I wonder if the book is actually a warning against complacency. What you're really doing is you're predicting that the world in the 21st century is going to get much better at spotting outlier talent. If you're not at the top of your game, you're no longer going to be able to rely on, say, the career totem pole, or your network, or just being a generally conscientious high achiever, to get ahead.</p><p>What you're really saying here is the late bloomers are coming, get it together, or you'll be forgotten. What do you make of this?</p><p><strong>[00:01:10] Henry: </strong>I wouldn't express it in such strong terms, but yes, that is more or less why I wrote the book. I was frustrated that people were ignoring this group of talent and I was convinced that there's potential being left on the table. Yes, it is a warning against complacency in the sense that you say, but also for the potential late bloomers. People take a lot of optimism from it, but some of the people who've read it carefully have said every page of this book is about people who worked really hard and never gave up.
The overall mood of the book is a bit more anti-complacency than maybe some of the direct expressions in it.</p><p>Yes, from a talent-spotting point of view, there are a lot of people in that book who got a huge advantage by not accepting what everyone else believed about the talent around them.</p><p><strong>[00:02:00] Dan: </strong>Yes. I think if you look at the marketing for the book, or why some people might pick it up if they see it on the shelf, it's, "Oh, I'm going to become a late bloomer and I'm not really trying anything." Then I read it and I say, "Oh my gosh, wait a second. This is not what I thought of it. It's not a 10-step process. This is hard."</p><p><strong>[00:02:18] Henry: </strong>In a lot of interviews, people are like, "The book was great, but can you tell me the 10 steps for me to become a late bloomer?" I'm like, "No, that is not how you become Toni Morrison. I'm very sorry that it doesn't just happen." It doesn't. You have to compromise on these things because otherwise, it's not a book. It's just a rant about why stuff is hard. There is a lot of truth to the optimistic view.</p><p>I think it's very important that this is why the book does this. It combines the hard-work message with the message that like, "Yes, but if you do that, it can happen." The biggest enemy is just not doing it. Like, "Oh, I'm 50, I'm 40, I'm whatever. I'm already doing this." Then you'll just never know. All this stuff we're talking about won't even come into play. You lose everything. You don't try and win. All that kind of thinking.</p><p>I think as a society, as a culture, we've become very like, "Oh, the graphs say that it's like this." Whereas in the past people would've been like, "I don't know man, God told me to cross an entire continent and make a new town so I'm just going to do it."
If we could just meet in the middle with these two approaches, because there's a lot of value in the crazy "guys just do it and we'll get something out of it" view.</p><p><strong>[00:03:38] Dan: </strong>What do you think is actually the greater force? I also read into it two forces that seem to hold late bloomers back. One, there's cultural structures, as you talk about. I think one of the biggest observations is a note to organizations that, "Hey, you should be looking outside of your organization to identify better talent." Then it's also in part individual beliefs. I view these as separate. One is to the cultural institutions and talent spotters, the other is to the individuals themselves.</p><p>What do you think is actually the greater force holding late bloomers back today?</p><p><strong>[00:04:14] Henry: </strong>I think it will vary by the individual a great deal. Some of the people I wrote about in the book, the moment it was made clear to them that you can try this now, this is your moment-- Some of those people, honestly, to me it feels a bit like something in a children's story. The call to adventure came and they said, "All right, let's go." In a way, they thought that it was the surroundings that had held them back. It was, but what they came to realize was that they could just get up and do it.</p><p>Whereas other people, like Malcolm X, clearly the institutional and societal structures were a huge, huge problem for him. I think there have been a lot of late bloomers in that position and there probably still are.
I know that we've paid a lot more attention to diversity and inclusion and we've tried to change some of these structural things, but I think there's a deeper lesson, which is whatever your assumptions are about who is good at this, those assumptions will stop you from seeing the potential in someone who doesn't fit your archetype.</p><p>Now, we're much better than we were at trying to not make those assumptions about women or about people of color or whatever. I'm not saying we're great at it, but effort has been made to change that. That doesn't mean we've all become like Merlin and we'll always know when we meet Arthur. That's not how it is. We all work with sets of assumptions, models of the world, frameworks of what good looks like in this job. When someone comes along who isn't an obvious pattern match, I think our ability to assess whether they would be good drops quite significantly.</p><p>One example I have of this, anyone who's worked in a corporate environment knows this. If you get hired to do job A and you've done it for 18 months and you express an interest in moving teams to job B, there'll be a program and you can have coffee with the manager. They'll be like, "You are really good. We want to keep you in the business. Let's move you," blah blah blah. If you do job A and you apply somewhere else for job B, they'll be like, "Oh, CV requirement of three years' experience, blah," whatever bullet points, whatever.</p><p>It's like, "Wait, how stupid is this?" Now, obviously, it's not that stupid because they know that-- If you've worked there, there is institutional knowledge and you can make an assessment and not everyone gets moved over. It does show you that if we can find a way of assessing people differently, we might be able to unlock potential in ways that we currently cannot. I don't see a lot of effort being put into those new forms of assessment.</p><p>Maybe AI will change that. Maybe better use of networks will change that.
Maybe we'll unleash ChatGPT across LinkedIn and it will be like, "Oh, here are loads of weird matches that actually might work out really well that you've never been able to--" You know the way that artificial intelligence told us, actually, you're playing chess wrong. Maybe it'll do something like that. At the moment I just feel like there are these obvious times when someone who's not qualified can have the job, but only if they're already in.</p><p>To me that suggests there are people outside your business that you would do well to hire that you can't spot. From that, we should say we're not as good at this as we think.</p><p><strong>[00:07:28] Dan: </strong>Do you need to believe in the great man theory of history for your book, the message of your book to fully make sense?</p><p><strong>[00:07:33] Henry: </strong>The great man theory of history is another one of the so-called Straussian readings of my book. Yes. I drafted some stuff about that and I deleted it because I was like, "Everyone is going to tell me this will never get published." You need to be sympathetic to it. You need to understand that of the three or four different explanations historians offer that is one of them. It has some explanatory power. I don't know if it's 10% or whatever.</p><p>We tend to hear from people like, "That's wrong. It's one of these other theories." I'm like, "Clearly many times in the world it is a large part of what happened." I think we're living through that right now. I think we live in the age not of heroes, but of anti-heroes. We don't worship heroes anymore. We hate the other guy's hero. That's a huge part of modern culture. 
One of the things the book is saying is take that theory more seriously, but in a very indirect way because already people listening to this are like, "Oh my God, he's an idiot."</p><p><strong>[00:08:34] Dan: </strong>Here's something I think that you do well that I view as really hard, and that is assessing the motivations of successful people. To me it seems like people are often not able to even articulate, or sometimes just not totally straightforward about, what is motivating them deep down. One example that struck me is, you pulled out the example of Larry Page being motivated by Nikola Tesla's commercial failures. Larry Page then grows up and he's really concerned about, "Okay, I'm not going to fail commercially. I have this great technical background and I'm going to put it to use." How do you figure out what actually is motivating people when you're analyzing their careers?</p><p><strong>[00:09:15] Henry: </strong>I don't know if I have a good methodological answer for that. I'm really pleased you've noticed that actually because I think that's very important. I think what I do that other people in this space might not do is I pay a lot of attention to the imagination because I'm primarily a literature person. I believe very strongly that the things that you absorb imaginatively as a child have a lifelong effect on your interests and therefore on your motivation.</p><p>Whatever it is that entices you, everyone will remember something. A picture of the woods, what's in the woods? Should we go into the woods? Do these woods look scary? What do they look like? Maybe there's an adventure in the woods. There's something, some film, some book. This is what happened with Larry Page, basically. His imagination was gripped by-- There's that biography of Tesla that he read.
The famous one with the famous ending, where it's like Tesla was just this anonymous guy on the streets of New York and everyone was bumping into him and they had no idea he was the greatest genius in the world.</p><p>What a powerful ending. What a fairytale ending. There's a Hans Christian Andersen quality to this, to the darkness of the way that biography ends. It's really well done. This clearly gripped Larry Page in the way that-- I'm always just talking about Merlin and Arthur, but it's got that quality. I think it's notable too that Tesla himself, this is freaky, he memorized the whole of Goethe's <em>Faust</em> and he could just recite long chunks of it whenever he felt like it. As a child, he was a big reader, big interest in poetry.</p><p>When he came up with the solution for the induction motor, he was walking in the park reciting <em>Faust</em>. This is the same thing as with Page. This thing gets into you when you are young and it just shapes your vision of the world. A lot of people who write about talent are much more rational, much more statistically oriented. They're all on that side of things. I'm less good at that, but I enjoy looking at a person and thinking, "What is exciting their imagination?" I think, whether we know it or not, this is very important to us, how we work.</p><p><strong>[00:11:29] Dan: </strong>What's the right way to think about early success for late bloomers? I was actually really surprised and never thought about it this way, but you classified Taylor Swift as a late bloomer. The idea here is, I don't know if this is tongue in cheek or whatever, but the point you made was basically, if you look at her sales, she was famous for what she does today 10 years ago. Her real, real Beatles-level stardom occurred quite a bit later in her career.</p><p>Here's my take on it, I'm just curious what your framework is. I feel like it's dangerous to have modest success in the thing that you don't really want to do.
That modest success, it'll keep you away from your passion. What is the right way to think about early success? Taylor Swift had it, and then she could still become a late bloomer. How dangerous is early success to someone who wants to one day do something they're really passionate about?</p><p><strong>[00:12:20] Henry: </strong>You mean you wanted to do something, but you ended up becoming a McKinsey consultant, and because they promoted you, you just stayed?</p><p><strong>[00:12:28] Dan: </strong>That's what I'm getting at. That's probably the canonical example. You're working at McKinsey, but you want to be a poet.</p><p><strong>[00:12:34] Henry: </strong>There are poets who've worked in corporate environments. I found Taylor Swift interesting because I think it was <em>The New York Times</em> that did a great piece and they graphed when she'd had her number ones and stuff. I don't remember all the numbers, she's had like 27. She's had a lot of number ones, and four-fifths of them have come in the last few years. In the same period of time that the Beatles were running for, she got three and they got loads.</p><p>In the same space of time that the Beatles changed music forever, she got three number ones and she's just some pop star. She's the country singer, she's done fine. She's doing well. No one's looking at that saying, "Oh, Taylor Swift's going to become the first billionaire musician. She's going to do The Eras Tour, whatever." It takes her, is it 15 years, to go from her starting out to what she is?</p><p><strong>[00:13:33] Dan: </strong>Probably at least.</p><p><strong>[00:13:34] Henry: </strong>Right. Maybe a little more. For a pop star, that's quite long. I was saying she's a late bloomer. It is a bit tongue in cheek, but actually, the graph is very striking. It's amazing to me that she's an outlier in that sense. I think Mozart is a late bloomer in a similar sense.
I'm not saying the 10,000 hours literally, but if you say, "When did he start his 10,000 hours?" Rather than, "Oh, he was really young and therefore a genius."</p><p>You say, "He starts doing his 10,000 hours of composition practice when he is eight or something." He doesn't start composing the music that we still listen to until years and years and years later. It actually takes him a long time to start writing the music that we think of as Mozart. If I played you some of the early Mozart, you'd be like, "This is dull." That's not what they put in the adverts or in the concert halls.</p><p>Taylor Swift in a similar way, she's going for so long, and then suddenly, it's like, "Whoa, you just became Taylor Swift." I don't know if early success in the "wrong thing" can get in the way of that at that level. I do think that there's a different one, one level down maybe, where you can get caught in what's called the competency trap, which is that you spend your 20s learning how to do something and that's just hard and difficult and you have to be embarrassed in public.</p><p>You show it to your boss and your boss goes, "What are you? An idiot? I didn't tell you to do it like that." Then by the time you are good, you're like, "I can just turn up, do stuff, not be embarrassed." This is easy. Then if I say to you, but really you want to change your career and you should do that before you get old and lose your teeth. Your whole instinctive reaction is, "I don't want to go back to being 23 and feeling embarrassed about everything I do." That's the trap.</p><p>Now, with Taylor Swift, maybe what you would say is on that model, she made a really big decision not to do country music anymore. She made actually several really big decisions to reinvent herself commercially, artistically, whatever. Not guaranteed to work because audiences want what they want. Also, I don't know, presumably very, very difficult for her.
I think those things are the mark of success.</p><p>I'm undecided on this question. If you allow yourself to get caught in the competency trap, does that just mean you didn't want it enough? I don't know. Different margins for different people. I think some people get into the competency trap, but they are still daydreaming about the thing they didn't do.</p><p><strong>[00:16:06] Dan: </strong>I see.</p><p><strong>[00:16:06] Henry: </strong>Some people aren't, and maybe that's the difference. My message is, if that's you, you should think seriously about how you deal with it.</p><p><strong>[00:16:15] Dan: </strong>The common reader, let's say by your personal definition of it, are they a dying breed? Are we at risk of losing the common reader altogether? Or do you think we'll see an increase in literary interest in, say, the next 20 or 30 years?</p><p><strong>[00:16:28] Henry: </strong>Oh, I think we could easily see an interest increase, yes. I think one of the things AI will never be able to do for you is read a great book. We're going to move into this fantastic world where it can deal with all the dross in your inbox and auto-reply to the family WhatsApp group and whatever. Make the low-level work of life easier and do your coding. It will never be able to read Anacreon on your behalf.</p><p>There's no summary. There's just nothing it can do. Either you've read it and allowed it to just completely demolish you, or you haven't read it and everything I'm saying you're like, "What is he talking about? That sounds weird. I don't know what that is." There's no middle ground. You could be like, "I just read this page and I don&#8217;t know what it means." AI is great at that, but AI will never take over that function for you. I think it will rise in value, being one of the things left that's really, really worth doing on your own.</p><p>I'm seeing more people say that offline is the new online. There's a great post on Catherine D's blog about that today.
I don't know if that is going to mean reading more books as opposed to going to more hangouts or whatever, but I'm optimistic that there's going to be a comeback. I also think the narrative is very much focused on how fewer people are reading English literature at Harvard. If you take a bigger view, lots and lots of people on Substack, on Interintellect, in the Catherine Project, they're reading the great works. The humanities are okay online.</p><p>There was that great news story about a book club somewhere in America, in the United States. I think that's right. They'd spent 20 years reading <em>Finnegans Wake</em>, and really going through every word and saying, "What does this mean?" Oh my God. They enjoyed it so much. They said the next book for the book club is <em>Finnegans Wake</em> again. Which to me sounds, I have so much respect for that crazy level of devotion. I couldn't do that.</p><p>You see these things coming out, and I speak to people and they want to read. I'm optimistic, but I do think that right now, everyone's very down on the whole thing and we need to have more people saying, "Read a book, you'll feel better."</p><p><strong>[00:18:39] Dan: </strong>To your point about college students maybe not being as interested in majoring in English literature, the common person or the common reader we believe is alive and well. How do you think the internet has impacted the demographics of these people? What are they doing for day jobs?</p><p><strong>[00:18:54] Henry: </strong>The people who I interact with on my Substack come from a very wide range, as far as I can tell. They do skew slightly-- I don't know that I've got as many 20-year-olds as people in their 30s, 40s, 50s, but I don't get to see everyone. I don't always know who I'm talking to online. The people, as far as I can tell, they have a very wide range of jobs and backgrounds. They're geographically diverse. I don't think there is a demographic.
I think a lot of people are just interested in reading.</p><p>That's why I'm most optimistic, because I don't see it as being-- You hear a lot of this stuff like, "Oh, the only people who read fiction are women between the ages of 25 and 42 or something." I'm like, "I'm sure that they are the biggest group of people who purchase novels," but it's a stupid thing to say. That's not the data I would choose to rely on when I was saying, who are the readers? We're never going to know who's picking up a book at home, who's going to the library. We don't know.</p><p>I think it's very diverse and I'm quite hopeful about that. I think Substack is really, really good for this.</p><p><strong>[00:20:04] Dan: </strong>I strongly agree. I don't think my interest in literature or any of the arts would be 50% of what it is without the internet. Between blogs, Goodreads, and Twitter, I just don't even know how I'd be discovering this stuff, or knowing that there are other people interested in it.</p><p><strong>[00:20:18] Henry: </strong>The search costs have gone way down. AI is good at giving you reading lists as well actually. It got really good.</p><p><strong>[00:20:24] Dan: </strong>It's really, really good. It's very good. If you say, hey, I'm interested in a specific book, and you say what you like about it, it'll give you 10 more. It is shockingly good.</p><p><strong>[00:20:36] Henry: </strong>That's what I mean. AI can be a really good reading companion, help you with all that stuff, so it's making it easier to find out what to read, but it will never do the reading. Put those two things together, I think compared to some more technical stuff where AI might just take over a lot of what you do, it's more of a complement here, not a substitute.</p><p><strong>[00:20:57] Dan: </strong>What did you see in <em>Rear Window</em> that gave you an appreciation for film?</p><p><strong>[00:21:00] Henry: </strong>I can't believe you found that out.
When I was young, I did not enjoy film so much. We went to the movies and I just thought movies were bad. This was the '90s, so a lot of movies were quite bad. I saw <em>The Matrix</em> and I thought that was exciting, but I thought <em>Jurassic Park</em> was what the movies were, which is great. It was great. It's a fun film, but it's not art.</p><p>Then I was sitting in-- This is a terrible story. I was at work and we had the TV on because it was Westminster, you always have to either have the live stream of Parliament or the news on in the background. I was moving between the two channels and I saw a shot from <em>Rear Window</em>, which I obviously couldn't just be like, "Oh, we'll just leave this film on."</p><p>There was something about the composition of the shot that-- partly I love Technicolor, but there was something about the composition of the shot that it had that thing I said about the woods, like, wait, what is that? There's something in there, there's a mystery here. It could have been a framed picture. This was the moment when I was like, "Oh yes, there are movies that are aesthetically beautiful." I was just really, really compelled to find out what that picture was that I had seen and to open up the world behind it.</p><p>I did, and I was very lucky because that of course happens to be one of the all-time great movies. Just truly, what a remarkable film. That just sent me off. That just sent me off. I think that's really important, encountering art like that, where you just look at it and you say, "What is going on in there?" Which is so strong in children, and we let it lapse, I think. It sounds immature for me to say it, it doesn't sound intellectual, but I think this is how art begins.
This is how the appreciation of art begins.</p><p>Frankly, the great movie directors spend a lot of time perfecting those shots so that they will produce that instinctive reaction in you.</p><p><strong>[00:22:58] Dan: </strong>What advantages do you think literature has over film or even music or the visual arts?</p><p><strong>[00:23:04] Henry: </strong>There is a quality of prose that cannot be intermediated, which is why when you watch an adaptation of a Jane Austen novel, you lose almost everything about the story because it just becomes a plot. Now she's good at dialogue, so you get some good dialogue, although a lot of adaptations don't follow it. There is something about the ability of prose to give you access not to the outer world of the story, but to the inner world of the story that can almost, I think, never be replicated in other ways. That's not true of all literature, that's not all that literature can do, but that's the first thing.</p><p>The second thing is that literature relies on you to do a lot of the work. When you start reading about a knight riding through a forest, or a young woman sitting in a window feeling trapped, you have to put it all together in your mind. Whereas in the film or whatever, it just gives it to you. I think that's why television, frankly, is often just so boring. Even the so-called Golden Age of Television.</p><p>I think a lot of music can have the same quality, but it's a much more difficult language to work in for a lot of people. It takes a long time to get used to it. The joy of prose or poetry is that you speak that language all the time. Your parents told you stories and it was the same thing. The book is just an extension of that, whereas music is a whole different realm of understanding. I think that's the main advantage.</p><p>There's something mystical about language. Oh, I sound very vague and waffly, but there's something about the sound of it.
I read <em>The Hobbit</em> out loud to my children recently, and it-</p><p><strong>[00:24:56] Dan: </strong>Oh fun.</p><p><strong>[00:24:57] Henry: </strong>-so much fun. I will never forget this. I was reading it for myself and my daughter said, "What is that?" I said, "Oh, it's this really good book. I'll read you the first page." As soon as I said, "In a hole in the ground there lived a Hobbit," she's like, "A what? What?" Now that's a simple sentence. Why would it be so enticing? Part of it is the meter of the sentence, "In a hole in the ground there lived a Hobbit." This is a kind of fairytale cadence. It's almost like hymn meter. It's got a lilt to it; it's a much more subtle version of "once upon a time in a faraway land."</p><p>Literature can combine those sounds and rhythms with the meaning of the words in a way that no other art form quite can. It's actually quite remarkable how that one sentence has presumably conjured up something in your imagination. Very basic words, but it starts to happen. Then you own that, and that's yours to be in. You can always just think to yourself about that hole and that Hobbit, and at the highest level of appreciation, you can then write your own book that's been influenced by that.</p><p>Presumably the visual arts do do this in a similar way, but because again, it's not how you speak to people when you buy your food or go to work, it doesn't have the same day-to-day directness. I think literature is precious to us from when we are young in a different way. Lots of films are precious to us, but it's in a slightly different way. We have to give words to things before we can properly know what they are.</p><p><strong>[00:26:41] Dan: </strong>You have a piece called <em>Nurture Your Imagination to Cultivate Anti-mimesis</em>. In this, you describe art as a tool to become anti-mimetic, or basically not do what everyone else is doing, and break out of the mold. 
I think it's applied to just general career advice there, but I'm curious, just more broadly, what do you view as the purpose of art? I get the sense that you believe there's something beyond career advice or building a startup that's useful about it.</p><p><strong>[00:27:08] Henry: </strong>I think that our lives are a quest for meaning, and that to be an individual is to constantly be on the search to make your life meaningful. That art is the nearest thing to life. Art gives you the raw material for going on that quest, for seeing different things in the world. I believe that imagination breaks the path that reason follows. That it's not that you have to see it to believe it, you have to believe it to see it. This is what art does.</p><p>We forget. When we grow up, we forget how much of what we want and know about the world comes from our imaginations, and we neglect them as we get older. Some people do. It's a really important part of being who you are. There's a wonderful-- Do you know the poem <em>Sir Gawain and the Green Knight?</em></p><p><strong>[00:28:00] Dan: </strong>I doubt it. No.</p><p><strong>[00:28:01] Henry: </strong>It's great. It's an Arthurian story. It's very good. There are lots of modern translations, so you don't have to slog your way through Middle English. It's a really good story. An Arthurian knight goes on a quest, fights a guy, comes home, standard thing. There's a great commentary on that by John Burrow, and he says, "Ultimately the quest is all about constantly going out and rediscovering yourself and rediscovering the world."</p><p>Now we are all doing that all the time. We do not shut up about it: I'm on a journey with this. My career journey, my spiritual journey, my personal growth journey. I'm traveling to find myself. We are questing. I feel like some people believe they're questing every day on the commute on the way to work. It's so fundamental to the way we talk about ourselves. The 20th century was built on the idea of the quest. 
Mass migrations, space travel, the invention of the airplane, the invention of the automobile. It goes on and on and on.</p><p>We are still living in that world. It is fundamental to the idea of modernity. It is the great inheritance of romanticism and Freudianism. We are constantly looking to expand ourselves, to get out of our comfort zones, to be our best self. It just goes on and on and on. Literature is the best way you have of imaginatively expanding yourself. It's not practical or immediate. There's no 10-step plan. It's not self-helpy, and so it suffers, I think, in contrast to some of the easy promises that are made in modern culture.</p><p>It is, over the long term, the most rewarding and the most effective way of living with your mind in a way that is not boring. You are stuck with your mind for your whole life, why wouldn't you furnish it? Wallace Stevens said, "We say God and the imagination are one. How high that highest candle lights the dark." Is most of what people are reading to help themselves get through their lives lighting the dark? I don't think so. George Eliot and Shakespeare and Jane Austen will light the dark for you.</p><p><strong>[00:30:08] Dan: </strong>Do you think that artistic themes wear out? Dostoevsky writing on existentialism, Kafka and absurdism, is there still room to explore those older ideas? Could there be a modern writer who comes in and says, "I'm just going to be the next Dostoevsky"? Or are those things pretty much worn out?</p><p><strong>[00:30:24] Henry: </strong>Well, Kafka was the next Dostoevsky, and then Borges was the next Kafka. Then as far as I can tell, Borges has influenced almost everyone who's written a book since, including fantasy writers like Susanna Clarke. I don't know that the themes wear out, but the treatment of the theme does for sure. This interacts with the idea of influence. 
There's a wonderful book called <em>The Burden of the Past and the English Poet</em> by Walter Jackson Bate.</p><p>His theory is that with romanticism comes the importance or the predominance of individual creativity. It's no longer good enough to write a great poem in the style of the Latin odes or whatever. You have to be your own guy. This is obviously a simplification of the thesis, but that's the general switch that we get with romanticism. When Wordsworth comes along, he says, "Oh, I can't be the new Milton. That's not good enough."</p><p>He's forced into becoming this inventive thing of being Wordsworth and writing about death. Then Keats comes along in the next generation and says, "Oh my God, I can't be Milton and I can't be Wordsworth. What am I supposed to do now?" He creates the second wave of romanticism. Then Tennyson comes along and says, "You have got to be kidding me." He creates a kind of post-romantic Victorianism. This is what happens.</p><p>I think that is probably a bigger part of why these things change, which is that Kafka looked at Dostoevsky and basically thought, "Well, I'm too late. He's done it." Then you go through the valley of the shadow of, I'm never going to be a great artist, it's already been done. You come up on the mountain of, whoopee, "I've discovered a way to spiritually destroy my influence and reinvent myself." Shakespeare did that. He did it with Marlowe, Philip Sidney and Ben Jonson. Dante did it with Virgil. It goes on and on.</p><p>I think that's probably more like it. Then what that gets coupled with is the bigger culture. There's a lot of angst in the Dostoevsky, Kafka period, because the world is in an anxious place. Modernism can be born out of the ruins of the First World War because everyone is just psychologically distraught at that point. Whereas someone who starts writing in the '20s, and so like a younger generation, they might have more of a fresh young things view of the world. 
That's like Evelyn Waugh.</p><p>Then the novels he starts writing after the Second World War equally become a bit darker. I think it's a combination: each generation of artists comes along and has to differentiate themselves, and they find themselves in a culture that's significantly different to what came before, which is why technological and social disruption is so coupled with the emergence of the greatest art, in my view.</p><p><strong>[00:33:22] Dan: </strong>I have a question for you, and you can psychoanalyze me, maybe this is a me problem. For some reason, when I see books or television or movies that try to write about social media or the internet, I always get this feeling of tackiness. I think movies do this worst of all. I don't know what it is-- Just the way they portray text messaging and social media. It either immediately feels outdated a year later, or it doesn't feel quite right.</p><p>When I think about it, this doesn't actually make sense because at any point in history, it would be weird to watch a television show that didn't reference that television or the radio existed. They're just part of normal life. When we watch old movies that portray television, we just think, "Oh yes, that's what they were doing at that time." Yet it feels weird to see the use of cell phones and the internet in modern movies. I'm not sure why that is. Anyways, you have this take where you've called for someone to write a great novel about the internet. What's going on with my intuition? Why does this feel tacky and why does this feel hard?</p><p><strong>[00:34:25] Henry: </strong>No, I think you are right. I think the problem in TV and movies is that when you send someone a text message, it is very alive in your imagination what that message means. A bit like telegraphs, we can send each other one little picture, a string of letters, and it means a lot, a lot more than just what's literally being said. 
Depending on who you are texting, it has a kind of resonance between you.</p><p>Maybe it's a shared joke, maybe it's just something you say, but when you just put that in a text message on a movie, it is just some letters and a smiley face. We're not in on the deep resonance that it has. That, I think, is the problem I referred to earlier: the movies can't use prose to get inside someone's head. There are novels I think that have done this very well. I think Sally Rooney is excellent at this. I think that's one of the reasons why she's so popular.</p><p>I haven't read lots of modern fiction, but I haven't read anyone else who's had characters use MSN Messenger and had it really, really, really be effective. Or, I think it's in <em>Beautiful World</em>, she depicts a character sitting on the bus, looking at the map on the phone, watching the dot of the bus move along the map. These are very, very good observations about what it is like to live with the internet. That's I think as close as we've come. No, I agree with you. It's a huge challenge.</p><p>The one other reason might be that we just take it too literally. The best story about radios is, I think it's called <em>The Very Enormous Radio</em>, or <em>The Enormous Radio</em>, by John Cheever, which was written in the '30s maybe, and it's really bizarre. Cheever often verges on the phantasmagorical, just verges on it. I don't know if you know the story, but I won't spoil it. What happens could not literally happen, but it's a very, very good story about radios and our responses to technology.</p><p>I think part of the problem might be that we take social media very seriously and very literally, and therefore we're not writing about the way it lives inside our minds, the way it lives inside our imaginations. We're just writing sentences that say, "She picked up her phone and tapped out a message." When you're like, "Oh God," that's just boring. Who cares? "Did she then rummage in her handbag?" 
You can string this stuff together for hours. It doesn't matter whether it's social media or not, it's just boring.</p><p>There's something about it-- Sally Rooney has done it, but in general, we have not yet captured the magic of what it feels like to send a text message.</p><p><strong>[00:37:17] Dan: </strong>Yes, that's a really good observation. Do you think there's a region or country right now that is producing the best literature it's ever produced?</p><p><strong>[00:37:25] Henry: </strong>I'm not enough of an expert to say that, but there's a general feeling that Latin American novels right now are really good. I read a couple of them, a couple of the ones that were on the International Booker shortlist this year, and I could see why people were saying that. I felt like the translations were not giving me everything about the book, but I did at least feel like I'm missing something really good. I could feel that.</p><p>There's a whole generation of Irish writers that seem to be very strong. I don't know if we can say they're the best, but it's pretty good, it's going really well. Again, I'm not super well read in Japanese fiction, but I've read a few modern Japanese novels and they've been great.</p><p><strong>[00:38:12] Dan: </strong>Okay. Another personal question. Probably one of my favorite parts of your blog is that there are just so many new authors on it that I don't see elsewhere-- I see a lot of books come across my Twitter feed, my RSS feed, but I still discover more on your blog than all the other sources combined. Luckily the Substack search function works pretty well, so you can just check and see who you've written about. I noticed there's a couple of my favorite authors that you don't talk about much at all.</p><p>I just want to get your take on what you think of them or why you haven't written on them. Knausgaard, Houellebecq, Ferrante and Bola&#241;o. They have similar themes, but I'm not sure that there's some thread that ties them all together perfectly. 
Just curious, do you have any thoughts on those authors or why you haven't written about them much?</p><p><strong>[00:38:54] Henry: </strong>Very embarrassing. In general, the answer is I haven't read 'em. I spend a significant amount of my time not reading modern literary fiction. It's a slow crawl for me to get there. I have tried now Knausgaard and Ferrante and I did not enjoy them. I think I have copies of both of them here and I intend to go back. Same for-- <strong>[00:39:19]</strong> In fact, no, that's true of all of them. I have copies here and I intend to go back. Other than <em>Submission</em>, I haven't had a great immersive reading experience with any of those writers.</p><p><strong>[00:39:28] Dan: </strong>Great. Very self-indulgent question.</p><p><strong>[00:39:29] Henry: </strong>No, that's good. More questions like that. That's great.</p><p><strong>[00:39:32] Dan: </strong>How should we think about someone-- I'm thinking about when you are analyzing a piece of literature. You have this really good piece that warns against what you call a weak misreading. The idea here is you're injecting your preconceived ideas onto the work of art, and you're not really thinking about, "How did this piece of art become what it is?" which is, I think, the phrasing that you use.</p><p>How should we think about someone like, say, Ren&#233; Girard, for whom that's pretty much all he does? He takes this one grand theory, and then he goes and he applies it to Shakespeare and Proust and other works of literature, and then he says, "See, I am analyzing the human condition with mimesis." It seems like that's as much of a weak misreading by your definition as you could possibly get. What's the right way to understand what he's doing there? Is he weak misreading or is he doing something else?</p><p><strong>[00:40:20] Henry: </strong>Let me unpack a couple of things. The piece you talk about is very much not aimed at critics like Girard. 
It's aimed at ordinary readers to say to them, "Look, whether you know it or not, we all have preconceived ideas about things." I run Zoom calls with people, book club Zoom calls. What I see again and again is people saying, "Oh, this bit of the book is about X, Y, Z," and X, Y, Z is just their preconceived notion of the world. I'm like, "Look at all these examples on the page. The book is screaming at you that it's the opposite of what you think, but you are pattern matching to what you already know." It can be hard work to actually let the book teach you what it's trying to say.</p><p>This is why everyone thinks that Robert Frost's poem <em>The Road Not Taken</em> is saying, "You should take the road less traveled, and then you'll be great," whereas it's in fact saying the opposite, because we're all constantly waiting to hear the happy message. That's the warning; the weak misreading phrase I stole from Harold Bloom. I agree, I think it's a good phrase. Girard I think is not guilty of this because he derived his theory from literature. When you read his literary criticism, it's quite compelling. The one that really has stayed with me is his reading of <em>A Midsummer Night's Dream</em>. When I read that, I found it impossible not to believe, to some extent certainly, that it is a play about mimesis for sure. He was close reading the details of the language and it was very persuasive.</p><p>He first derived the theory out of-- I can't remember all the novels, but <em>Underground Man</em> by Dostoevsky, Proust, Stendhal, people like this. Again, it's I think quite a persuasive reading of many 19th century novels. It's certainly a very persuasive reading that this is what some of those authors are up to. This is one of the driving ideas in their books. I do agree with you that there comes a point when this is your grand unifying theory of life, the universe and everything. 
Therefore it can be a bit like-- I don't think everything is always about this, but in general I think he sort of stands slightly outside the point I was making because he's derived so much of it through these careful readings. It's a horseshoe, I guess. You come to it with no knowledge and you just see yourself in it. By the time you've got so far around that you've created your grand theory, it is a bit Freudian. It's like everything is wish fulfillment, everything is-- You just want to hang around in the middle.</p><p><strong>[00:42:55] Dan: </strong>What do you think Harold Bloom's worst take is?</p><p><strong>[00:42:59] Henry: </strong>I haven't read the book, the one about <em>The Book of J</em>, but that is commonly cited as the point when he completely lost it. He theorized that bits of the Bible were written by a woman of the court of Solomon and all this sort of thing. In a funny way, I think that <em>The Western Canon</em> was slightly misguided. I used to be super Bloomian, and I do still think there is too much critical theory and too much insistence that literature has to be put to some sociopolitical use. Overall, I think the idea of extending <em>The Canon</em> and adding to it has been positive. Personally, I've read lots of authors that I hadn't previously read and thought, "This was really great. This was great literature. I'm so glad I read this."</p><p>I can't see why anyone would have a problem with adding to <em>The Canon</em>. On the narrow question of some of those arguments he made, he was probably right. Polemicizing it and so forth made him this wonderful figure in the culture wars, but probably tipped him over the edge in terms of the long-term usefulness of his answer to what <em>The Canon</em> should actually be. I think that was his aim though. 
I think he saw himself doing that, but maybe in a funny way, his best take was also his worst take, right?</p><p><strong>[00:44:15] Dan: </strong>Yes.</p><p><strong>[00:44:16] Henry: </strong>The other irony of Bloom, of course, is that he was great on television. He spent his whole life saying, "Stop watching TV, get off the internet. It's going to destroy your brain." You're like, "Harold, you're a TV star. This is your whole life."</p><p><strong>[00:44:30] Dan: </strong>Here's a quote from you that I'd like to get an understanding of what you meant by. It's from your piece <em>How to Have Good Taste</em>. This is one of my favorites, by the way. You write, "I disagree that capitalism and the internet make it harder to have good taste." Why is that?</p><p><strong>[00:44:43] Henry: </strong>Because it lowers search costs, because it broadens the range of what you can discover, because it makes it easier to get access to the most well-informed critical opinions, because conversations like this are possible. It just enables you to take things seriously at much lower cost than was the case. You're much less limited by the zone in which you happen to begin. I think that's the number one factor. It's so much easier now to be like, "What are the 100 greatest movies of all time, and can I just watch them cheaply, please?" So much easier.</p><p>Now, I agree that it probably also makes it easier to have bad taste, but I think some people conflate their dislike of "late-stage capitalism" and the stultifying effects of social media with this issue. I don't think it's an inevitable thing. I think it's just splitting people out in different ways.</p><p><strong>[00:45:48] Dan: </strong>You've reviewed Tyler Cowen's <em>Stubborn Attachments</em>, which is why I thought you might have a point of view on this: do you think there's any direct causal relationship between economic progress and the arts? 
In other words, if GDP explodes, would you expect the output of good art to predictably go up or down, or are GDP and economic progress totally unrelated to it?</p><p><strong>[00:46:16] Henry: </strong>I do think that the arts flourish when there's a good market. That is partly because it gives them the technology to do different things. It gives the artist the money to spend their time producing the art. It also in general gives them a better audience. One of the biggest challenges for interesting new art is to get anyone to care about it. The basic answer is yes. I do think that's true. Let's take the example of Shakespeare. Shakespeare emerges at a time when you first get indoor theaters. There's elite overproduction from the grammar schools, and Elizabethan England is getting rich and has a general sense of itself as like, "We're going to go out into the world and do great things."</p><p>That's why we got Shakespeare. If you take one of those planks away, it becomes much harder to sustain his output, much harder. Also, just in general, wouldn't any economist predict that if GDP doubled, the output of good things would double or more than double?</p><p><strong>[00:47:19] Dan: </strong>That's the theory.</p><p><strong>[00:47:21] Henry: </strong>I don't see any reason to disagree with the economist just because I happen to enjoy reading poetry.</p><p><strong>[00:47:26] Dan: </strong>Just over a year ago, you commented on how useful ChatGPT was in writing your book, in a post that outlines all the different ways that you use it in your research. 
Over the last year or so, though, since you've written that, have you become more or less bullish on the usefulness of LLMs for your specific daily workflow?</p><p><strong>[00:47:42] Henry: </strong>For my workflow, less optimistic, and I'm thinking of writing about that.</p><p><strong>[00:47:46] Dan: </strong>Interesting.</p><p><strong>[00:47:48] Henry: </strong>I agree with the people who say that ChatGPT got lazy. There was a period when I found this quite irritating, and now I'm just resigned to it. I think Perplexity is really good if you want to do searches. I think it's probably better than Google. As we were saying, I read <em>Ulysses</em> earlier this year, and I was putting chunks of it into ChatGPT saying, "I think it means this. Does it mean this? What about that word?" It was pretty good at that. It does reading lists. In general, the new one, 4o or whatever they call it, I think has improved some of that. I love the new Claude.</p><p>I've also come to feel that actually, as I said, because it can't replace the reading for me, it's an administrative assistant, and for what I do, there's less margin than there is for some other people. I was very struck recently. I did an experiment where I just dropped two lines of poetry into it. I gave it two lines of Shakespeare, and it made up its own two lines in return. They weren't great, but they were fine. I thought, "Oh, this is cool." Then everything else I gave it, it just gave me the next stanza of the actual poem. Word perfect. I was like, "I'm really going to test this." I got a random stanza from Book IV of <em>The Faerie Queene</em>, which is hundreds of pages deep in a poem that no one ever reads.</p><p>I put that in, and it got the next stanza perfect. I was like, "If that's too much on the never-make-a-mistake, never-hallucinate side, I can just Google that. Why do I need the LLM to do it?" 
Now, I get that it's going to become a new operating system, and it's like the film <em>Her</em>, and that's really what's exciting. I don't have that new operating system. When Sydney love bombed that journalist, I was like, "This thing is great. This is going to be so cool." Now, it's like, "I already know the next stanza. I thought you were going to spin me a new yarn." If you say write a poem, obviously, it's just <strong>[unintelligible 00:49:52]</strong>. I do feel like it's become so sensible that it's slightly lost its uses. I'm still long-term bullish.</p><p><strong>[00:50:02] Dan: </strong>You did a late bloomer GPT for your book. When you publish your next book, will you do one of those again? What has the reception to that been like?</p><p><strong>[00:50:09] Henry: </strong>Not huge. I think the length of the book precludes it from being super useful, but yes, I would do it for all sorts of things. I've made GPTs for bits of my blog and stuff. I just think it's here, we have to use it. Yes, I would always do that. As I said, I think they're iterating it right now in a way that's-- I don't want the sensible, never-make-a-mistake people to win. I want it to have a crazy setting. If it's just being really good at doing my accounts, that's obviously amazing. For what you and I are talking about, it should be able to come up with a really weird question. I bet it could if they let it, but I think they've been scared into putting it on tighter rails.</p><p><strong>[00:50:55] Dan: </strong>You have this list you wrote of seven books that you considered foundational when you were very young. I found the list great and interesting. I'm curious, you listed Hayek's <em>The Use of Knowledge in Society</em> alongside Tolstoy, and Dickens, and Keats. What's Hayek doing on that list? How has he influenced--</p><p><strong>[00:51:11] Henry: </strong>I wrote the list straight off the top of my head out of nowhere. It's not a great list. 
Nonetheless, I stand by the Hayek pick for sure. I love that essay. When I was 21, I finished my English literature degree and I was like, "I know nothing about the world. I need to read some real stuff," because it had just been wall-to-wall poetry, and that's been great, but I need to get into some history, some economics, some philosophy. I need to really do this. I was reading John Stuart Mill and Isaiah Berlin, doing online economics courses. When I read Hayek, I was like, "Oh my god. Yes, this explains it in--" what is it, 12 pages? A ridiculously concise essay.</p><p>I think he's right. I think the information problem is real, and I've seen it every day in my life. That unlocked a way of thinking about things, and I could start making sense of what economics is really about. I think it holds true as a model of a liberal international world.</p><p><strong>[00:52:19] Dan: </strong>I have on my personal site just a list of a couple of ideas that I considered foundational to myself, and that essay is on there. I'm a big fan as well. Let's talk about poetry for a second, because you also note that at one point in time, that entire list would've just been poetry. What's the mindset you've had, or the phase of life that you've been in, where you're all in on poetry versus prose? What makes you prefer one or the other?</p><p><strong>[00:52:42] Henry: </strong>I don't know. I'm tempted to think that when you're 17, of course, it's poetry because the whole world is so intense. Everything is so intense. Now that I'm 37, obviously not everything will be poetry. I don't really know the answer. Hopefully, the real answer is I'm high in openness, and I want to keep exploring. If you read <em>Ivanhoe</em> by Walter Scott, this can really light up the imagination much in the way that Keats once did. I wonder if it's just a function of aging. 
Darwin was obsessed with poetry when he was young, and he said by the time he was middle-aged, he couldn't face it anymore.</p><p><strong>[00:53:20] Dan: </strong>What does the poetry of Keats mean to you?</p><p><strong>[00:53:23] Henry: </strong>I love Keats. I memorized whole poems by him when I was a teenager. Keats was romanticism as far as I was concerned. I later had a Wordsworth phase, but at that point, that's what I lived for. The density of his language is extraordinary. What he means to me now is to think back to when I was 17. I still read him, I still get a lot out of him, but that was what it was. I read the sonnets now more than the odes.</p><p><strong>[00:53:50] Dan: </strong>Back to your point that reading about the purpose of art is not as interesting to you. What about philosophy? Do you have a favorite philosopher?</p><p><strong>[00:53:59] Henry: </strong>It is interesting to me. I love philosophy, and it is interesting. I shouldn't say this, but I don't think there's very much good aesthetic philosophy. My favorite philosopher, I have to say John Stuart Mill based on my recent output. Like everyone else, Plato was what shocked me into realizing there was that way of thinking about the world. I still don't think you can properly think about art and the nature of art in society without thinking about Plato. Maybe I would be forced to say it's Plato, even though the person I want to read is John Stuart Mill.</p><p><strong>[00:54:36] Dan: </strong>At the margin, what does the world need more of today: great literature or great philosophers? In the 21st century, if you could pick, would you rather we produce one or two canonical, all-time great philosophers, or one or two all-time great novelists?</p><p><strong>[00:54:52] Henry: </strong>That's a great question. I'm going to say novelist or poet or whatever. We need an imaginative mind, I think. It's the embodiment of ideas in art that matters most. 
I think we're suffering from so many theories, so many facts, so many philosophies, so many explanations, and no one has yet managed to embed all of that into a story about society and say, "This is what all of that means for people moving around and living their lives and doing their jobs." Sally Rooney has tried to do that, but she's a Marxist and a determinist.</p><p>While it resonates with a lot of people, I'm not sure she's yet got to the George Eliot level of writing a <em>Middlemarch</em>, but I think that's what we're lacking. Aren't we all exhausted with huge books that explain everything?</p><p><strong>[00:55:38] Dan: </strong>Maybe those are not by great philosophers, and the point is a book that explains everything that's actually good.</p><p><strong>[00:55:44] Henry: </strong>Yes. I feel like we've had a good run at that, and would it not have come up by now? Who was the last great philosopher?</p><p><strong>[00:55:51] Dan: </strong>Gosh, depends on who you ask, right?</p><p><strong>[00:55:53] Henry: </strong>You see what I mean?</p><p><strong>[00:55:54] Dan: </strong>Yes, maybe Parfit.</p><p><strong>[00:55:56] Henry: </strong>Parfit, yes. That would be my answer. Where is the great novel of Parfit's ideas?</p><p><strong>[00:56:02] Dan: </strong>It would probably be boring.</p><p><strong>[00:56:04] Henry: </strong>If you said though, "This is George Eliot. She's translated a long work of German theology about the fact that Christ wasn't divine, but that His ideas can still thrive in a humanistic culture, and she's going to write really long novels about that," you'd be like, "Get out." It sounds dreadful. As it turns out, <em>Middlemarch</em> is the best novel ever written, and everyone always says that when they read it.</p><p><strong>[00:56:31] Dan: </strong>That's fair enough.</p><p><strong>[00:56:31] Henry: </strong>I think we need a George Eliot more than we need anything else. I think there are lots of ideas out there. 
I also think this works the other way. I think of the rationalists, the effective altruists, the utilitarians, the libertarians, all of them. I'm sympathetic to all these people. Where is the art in this movement? Where is their ability to actually tell me what it will be like for society to live in this way? I feel like LessWrong has worn itself out because it has tried everything other than art. That's what I think we need. I also note that I'm going to talk about that movie, <em>Her</em>, again.</p><p>That's had a huge impact on the way people think about AI. I suspect it's had more of an impact than anything else, for good and for bad. I think people are going to keep discovering that film. I want more of that.</p><p><strong>[00:57:25] Dan: </strong>Do you think we should care about the morality of the person behind great art, or should we just care about the art itself?</p><p><strong>[00:57:31] Henry: </strong>I think you can care about it without letting it affect your view of the art. I think that's quite important. I don't agree with the people who say just divorce the two and don't worry about the biography, because there's a great <strong>[unintelligible 00:57:45]</strong> Paul quote, where he's like, "Artists are really major significant people in our culture, and a good biography of their life might have more to tell us about the society we're living in than the art itself," which is quite a strong opinion, but I think it's worth giving a lot of consideration to. I think some of the great biographies of the past have been like that and have had very big influences on their cultures.</p><p>The revelations of the truth about the artist have been very socially important. I don't think what we should do is let that spill over into saying, "Oh, Philip Larkin was racist and now I hate his poetry and it makes me feel sick when I read it." I think that's just a huge intellectual mistake, because you quickly then start only reading books written by nice people. 
That's just not what we're here for. I want both, but I want them to be kept reasonably separate. Obviously, there should be some crossover and some cross-readings. Imagine saying the life of Beethoven is irrelevant for us to understand. That's insane. That's just clearly insane. That's obviously wrong.</p><p><strong>[00:58:46] Dan: </strong>It's potentially the most interesting part about it.</p><p><strong>[00:58:50] Henry: </strong>It's so important that we understand these things, but we shouldn't let ourselves get carried away. The bigger point there is that we shouldn't allow our appreciation of art to become partisan like everything else. One of the benefits of art is that it can just steer clear of all that nonsense, when so much is over-invested in politics these days.</p><p><strong>[00:59:10] Dan: </strong>Do you think nepotism is bad?</p><p><strong>[00:59:12] Henry: </strong>I think it can be, but we don't appreciate the extent to which it can be very, very useful, and it's, in a very Hayekian way, a source of discovering knowledge that we cannot otherwise discover. I have a piece in the pipeline about this, so I don't want to give the whole thing away. Basically, if you want to assess whether someone is good for a particular job, there are personal qualities that simply cannot be measured and tabulated and compared, and you get the best information through something that looks a bit like nepotism.</p><p>That's why, if you look around, everyone's always trying to find a mentor, a patron, someone who can do them a favor, someone who can give them a leg up. The secret is to do it in a way that's fair and based on promoting the right people, not to do it in a sort of White-guys-promoting-White-guys way. 
We have to be careful about when nepotism is good and when it isn't.</p><p><strong>[01:00:07] Dan: </strong>You of course rate Shakespeare very highly, but I'm wondering, do you think that Harold Bloom overrated him?</p><p><strong>[01:00:13] Henry: </strong>No.</p><p><strong>[01:00:14] Dan: </strong>You agree with Harold Bloom? Do you think he underrated him? Would you go even further?</p><p><strong>[01:00:17] Henry: </strong>I don't know if you could go further, because didn't he use to say that thing about there being only one God, and his name is William Shakespeare, or something.</p><p><strong>[01:00:24] Dan: </strong>The title of the book is that he invented the human.</p><p><strong>[01:00:27] Henry: </strong>There's a misunderstanding about that book, because what he's doing there is playing off of a very well-known Samuel Johnson saying, which is that the essence of poetry is invention. Now, Johnson did not mean invention in the way we think of invention, partly because he was a classicist, and he believed in the mimetic theory of art, and he didn't want super, super original people. He wanted people who could show us very familiar things about the world in an unfamiliar way. When Johnson says invention, it is a twin with the word discovery, and discovery is one of the definitions that Johnson puts in the dictionary.</p><p>What he means is that Shakespeare is the first person who properly discovers the real depths of human subjectivity in art. Now, you still might disagree with that, but it becomes a much more sensible position when you see it that way, right?</p><p><strong>[01:01:20] Dan: </strong>Yes.</p><p><strong>[01:01:21] Henry: </strong>He was just using this 18th-century word without properly explaining himself, I think. I think it's quite a good book actually, the Shakespeare book. Some people think it's crazy. I think there are two things. 
He's constantly pointing to this thing about discovering subjectivity, and as John Stuart Mill said, the way that we overhear the artist, we don't hear the artist, we overhear them. Bloom says the characters in Shakespeare overhear themselves. That's really interesting. If you take a selection of the plays and read them chronologically, you do see that right at the end of <em>Richard III</em>, Richard does a soliloquy and he overhears himself and he starts saying, "Oh my God. Oh." He starts interacting with himself.</p><p>By the time of <em>Hamlet</em>, this is the dominant mode. This is definitely something that Shakespeare "discovers, invents," and it becomes totally normal for us to think about ourselves in that way and to think about art in that way. I think that's valuable. The other thing is he'll just explain the play to you and then quote a whole page worth of stuff. I think he sensibly understood that very often he should just give space to Shakespeare, so that you would come away from that book having read quite a lot of Shakespeare and understood it better than if you'd just read it on your own. I think it's great, actually. I think it's a really good book. The people who dislike it have very valid reasons, but if you just take it on the terms I'm saying, you can get a lot out of it.</p><p><strong>[01:02:46] Dan: </strong>You've commented on how Shakespeare's often performed poorly. If you were in charge of directing a play, what would be your top things where you're like, "We have to get this right"?</p><p><strong>[01:02:54] Henry: </strong>I was an amateur director, so I have a lot of vanity about my ideas of how I would direct plays. There's some advice from No&#235;l Coward, which he meant as a joke, but not really, which is speak clearly and don't bump into anyone. I think that actually is the starting point. 
I think so much is put upon Shakespeare that if you can actually just work out what these lines mean and speak them clearly, you are a long way towards doing a really good production. What I see too much of is very impassioned acting. When I go to the theater now, I feel like there's no sense of dramatic tension building up and falling down and building up again.</p><p>There's just lots of, "Everything is super intense all the time, and we're all running around the stage and yelling and going, 'Oh my--.'" I'm like, "Guys, no one's going to die for two hours. We need to build up. [chuckles] The apocalypse is not here already." I think that's the biggest thing, along with an appreciation that we hear Shakespeare as much as we watch him. People always say his plays were written to be performed, but he knew that people made anthologies. He knew that people pirated his work in poetry collections, and he purposefully wrote lots of stuff that would be put into those anthologies. His work was written to be read as well. We should give the audience the benefit of knowing that they will just listen.</p><p><strong>[01:04:14] Dan: </strong>You had to turn off the comments on your post that argued Shakespeare actually wrote Shakespeare, and it was quite controversial. Why is this so controversial? Reading your posts, it seems like a pretty clear-cut argument, but it gets a lot of people really heated. Why do you think people care?</p><p><strong>[01:04:30] Henry: </strong>I don't know if I want to say what I really think, because it might be rude. I honestly think that there are some people who believe in a conspiracy theory. I am not trying to be rude when I say that, because they think that there is a conspiracy to conceal the truth. They have said that, and they say things like, "Oh, there's no actual record of Shakespeare going to the grammar school, and we've never found any of the books he owned." 
Then they'll say, "Why didn't anyone ever tell me that when I was in academia? Why didn't anyone ever--" They're actually promoting the idea that there is a conspiracy.</p><p>Now, I personally have known people who believed conspiracy theories, and it's not a serious way to discuss something like Shakespeare, and everything that I have ever read from a non-Stratfordian has been written in that way. "Did you know that Mark Twain believed this?" That kind of thing. I'm like, "Everyone knows that this isn't a serious way of making arguments." They send me emails saying, "You're ignoring the facts, and you're an asshole." I'm like, "If you emailed me any facts, that would be cool, but that's not what this is." One of them sent me an email saying, "You've said you've got all this proof. It isn't proof. You've got no idea what you're talking about."</p><p>I was like, "Tell me. Just tell me what your argument is." They were like, "I read the sonnets backwards focusing on particular details." You're telling me I'm missing three archival records, and you're saying the whole thing is bogus. This backwards reading of the sonnets, on the other hand, that's proof. That's why it gets heated, and I turned the comments off because people on both sides were starting to be a bit rude. I was like, "This isn't going anywhere," and I don't want any more emails from these people.</p><p>I made it very clear that if someone wrote a piece on Substack that laid out the case really well, I would link to it or cross-post it on my blog. I will still do that, but I haven't been sent anything that doesn't open with, "Did you know that three justices of the Supreme Court believe in this?" "Great. So what? [chuckles] They believe lots of things that make other people really angry." You know what I mean? Are you telling me you believe everything that Justice Scalia believed? No, that's not how we make decisions. 
I think that's why it makes people cross.</p><p><strong>[01:06:58] Dan: </strong>Tyler Cowen gave Noah Smith the Writing Every Day award for 2023, but I view you, honestly, as comparably prolific, and here's why. Your output is super consistent, and it's quite deep. You write on all these different topics where the topic is a book, so it takes a long time to actually read up on that knowledge. Noah also has the benefit of being able to write about current events, whereas you're not typically doing that, so you can't just skim the news. The question just is, how do you stay so prolific? I know I've asked you this previously, and you said you don't feel prolific, but I think you are. Doing research for this, there's a lot of material to go through. How do you do it?</p><p><strong>[01:07:35] Henry: </strong>I don't know. Now that I don't have a job, I can obviously be more prolific. I'm quite boring in that this is my whole life. I don't know if the word is obsessive, but I'm very, very focused on the things that I do. I'm told that when I was a child, I would just endlessly rewatch the same movie, or I would just endlessly sit there and read, and I still have that. I think that is a form of narrowness. I do less, I have less range of life than some other people, in a way that comes with the advantage that it's not very difficult for me to write 2,000, 3,000, or 4,000 words a day. 
Also, whenever I get an idea, I just write it down, take paper with you everywhere.</p>]]></content:encoded></item><item><title><![CDATA[Jan Swafford]]></title><description><![CDATA[Classical music and artistic genius]]></description><link>https://www.danschulz.co/p/jan-swafford</link><guid isPermaLink="false">https://www.danschulz.co/p/jan-swafford</guid><dc:creator><![CDATA[Dan]]></dc:creator><pubDate>Mon, 17 Jun 2024 10:46:07 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/145710784/b04bcebc9f6ee2293ccbfec079e4e597.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>Jan Swafford is an author and composer of classical music. He's written canonical biographies on Mozart, Beethoven, Brahms, and Ives, and is a composer of works such as Landscape with Traveler, From the Shadow of the Mountain, and The Silence at Yuma Point.</p><div id="youtube2-KCQWhPzpYBQ" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;KCQWhPzpYBQ&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/KCQWhPzpYBQ?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><iframe class="spotify-wrap podcast" data-attrs="{&quot;image&quot;:&quot;https://i.scdn.co/image/ab6765630000ba8a724624f1751437506a995515&quot;,&quot;title&quot;:&quot;Jan Swafford&quot;,&quot;subtitle&quot;:&quot;Dan Schulz&quot;,&quot;description&quot;:&quot;Episode&quot;,&quot;url&quot;:&quot;https://open.spotify.com/episode/7ulpn47lgJpLByMGb9dYQL&quot;,&quot;belowTheFold&quot;:false,&quot;noScroll&quot;:false}" src="https://open.spotify.com/embed/episode/7ulpn47lgJpLByMGb9dYQL" frameborder="0" gesture="media" allowfullscreen="true" allow="encrypted-media" data-component-name="Spotify2ToDOM"></iframe><div 
class="apple-podcast-container" data-component-name="ApplePodcastToDom"><iframe class="apple-podcast " data-attrs="{&quot;url&quot;:&quot;https://embed.podcasts.apple.com/us/podcast/undertone/id1693303954?i=1000659244294&quot;,&quot;isEpisode&quot;:true,&quot;imageUrl&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/podcast-episode_1000659244294.jpg&quot;,&quot;title&quot;:&quot;Jan Swafford&quot;,&quot;podcastTitle&quot;:&quot;Undertone&quot;,&quot;podcastByline&quot;:&quot;&quot;,&quot;duration&quot;:5157000,&quot;numEpisodes&quot;:&quot;&quot;,&quot;targetUrl&quot;:&quot;https://podcasts.apple.com/us/podcast/jan-swafford/id1693303954?i=1000659244294&amp;uo=4&quot;,&quot;releaseDate&quot;:&quot;2024-06-17T08:00:00Z&quot;}" src="https://embed.podcasts.apple.com/us/podcast/undertone/id1693303954?i=1000659244294" frameborder="0" allow="autoplay *; encrypted-media *;" allowfullscreen="true"></iframe></div><h3>Timestamps</h3><p>(0:00:00) Intro</p><p>(0:00:43) Beethoven and immortality</p><p>(0:02:35) Mozart and human nature</p><p>(0:04:43) Mozart and romanticism</p><p>(0:08:13) Artistic genius&nbsp;</p><p>(0:12:38) Beethoven&#8217;s late period</p><p>(0:15:10) Composers and virtuosos</p><p>(0:19:00) Beethoven&#8217;s father</p><p>(0:21:02) Influence of the enlightenment</p><p>(0:26:08) Identifying the next Beethoven</p><p>(0:31:23) Aristocratic patrons</p><p>(0:34:11) The German genius</p><p>(0:38:45) Modern art</p><p>(0:41:54) Artistic influences</p><p>(0:45:45) Explaining your art</p><p>(0:49:55) The solo composer</p><p>(0:51:50) What would Beethoven say today?</p><p>(0:55:39) Modern recordings</p><p>(1:02:22) Future of art</p><p>(1:04:57) Jan&#8217;s compositions</p><p>(1:11:47) Enthusiasts and the internet</p><p>(1:13:29) Interpreting history</p><p>(1:18:02) Music v literature</p><p>(1:23:07) Three great spiritual forces&nbsp;</p><h3>Links</h3><ul><li><p><a href="https://www.janswafford.com/">&#8288;&#8288;Jan's 
homepage&#8288;&#8288;</a></p></li><li><p><a href="https://www.youtube.com/watch?v=T91hWYL_Uas">&#8288;&#8288;Jan's piece "They That Mourn"&#8288;&#8288;</a></p></li><li><p><a href="https://www.amazon.com/Mozart-Reign-Love-Jan-Swafford/dp/0062433571">&#8288;&#8288;Mozart biography&#8288;&#8288;</a></p></li><li><p><a href="https://www.amazon.com/Beethoven-Anguish-Triumph-Jan-Swafford/dp/061805474X">&#8288;&#8288;Beethoven biography&#8288;&#8288;</a></p></li><li><p><a href="https://www.amazon.com/Johannes-Brahms-Biography-Jan-Swafford/dp/0679745823">&#8288;&#8288;Brahms biography&#8288;&#8288;</a></p></li><li><p><a href="https://www.amazon.com/Charles-Ives-Music-Jan-Swafford/dp/0393317196">&#8288;&#8288;Ives biography&#8288;&#8288;</a></p></li><li><p><a href="https://twitter.com/dnschlz">&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;Follow Dan on X&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;</a></p></li><li><p>Subscribe to Undertone on&nbsp;<a href="https://www.youtube.com/channel/UCtRjQ8EA3Via7KXzv9F173Q">&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;YouTube&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;</a>,&nbsp;<a href="https://podcasts.apple.com/us/podcast/the-world-of-yesterday/id1693303954">&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;Apple&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;</a>,&nbsp;<a href="https://open.spotify.com/show/59YkrYwjAgiKAVMNGWPaLE">&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;Spotify&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;</a>, or&nbsp;<a href="https://www.danschulz.co/">&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;Substack&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;</a></p></li><li><p>Watch or listen to previous episodes of Undertone with <a href="https://www.danschulz.co/p/tyler-cowen">&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;Tyler Cowen&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;</a>, <a 
href="https://www.danschulz.co/p/vitalik-buterin">&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;Vitalik Buterin&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;</a>, <a href="https://www.danschulz.co/p/scott-sumner">&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;Scott Sumner&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;</a>, <a href="https://www.danschulz.co/p/samo-burja">&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;Samo Burja&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;</a>, <a href="https://www.danschulz.co/p/3-steve-hsu">&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;Steve Hsu&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;</a>, and <a href="https://www.danschulz.co/">&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;more&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;</a></p></li><li><p>I love hearing from listeners. Email me any time at dan@danschulz.co</p></li><li><p>Share anonymous feedback on the podcast: <a href="https://forms.gle/w7LXMCXhdJ5Q2eB9A">&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;https://forms.gle/w7LXMCXhdJ5Q2eB9A&#8288;</a></p></li></ul><h3>Transcript</h3><p><strong>[00:00:10] Dan Schulz: </strong>This is a conversation with Jan Swafford. He's a composer of classical music as well as a writer. His biographies of Mozart and Beethoven are considered by many to be the best ever written. This conversation was a ton of fun for me, as music is a deep personal interest, and Jan is one of the world's experts. We talk about what made the great composers capable of producing the art they did, what's happening with music and art today, and where it might be headed. We also dig into Jan's music and his personal influences. I hope you enjoy this one, and thank you for listening. Let's get right into it.</p><p>I'm here today with Jan Swafford. Jan, welcome to the show.</p><p><strong>[00:00:42] Jan Swafford: </strong>Thank you, Dan.</p><p><strong>[00:00:43] Dan: </strong>First question here. Handel was the first composer whose nonchurch music stayed in the repertoire. 
This happened while Beethoven was alive, for him to see it and really internalize this idea that a composer could become immortal, right? For Mozart and Haydn, my understanding anyway is that the concept of immortality long after death, of their work remaining valuable to future generations, wasn't as much a part of who they were. You can object to the premise a little bit, but I'm curious, how much was the idea of immortality core to who Beethoven was?</p><p><strong>[00:01:19] Jan: </strong>Very much so, I think. Handel died when Mozart was 3 and Haydn was in his 20s, but I've never found a letter or anything that Mozart or Haydn wrote talking about the idea of a permanent repertoire that they're going to be part of, and Beethoven did. When you consider yourself writing for the future as well as the present, and I think in Beethoven's case, more and more, he thought of himself as writing mainly for the future, it makes you a different kind of composer. It doesn't make you better, it doesn't have anything to do with that, but it makes you more aware of yourself as, in effect, a historical figure.</p><p>My line, I don't know if you remember this, is, at the end of the <em>Mozart,</em> I said, Beethoven wrote for humanity. He was writing for history and for what he saw as the whole of humanity, which he was trying to serve with his talent. Mozart, I say, wrote for people. He wrote for people he knew, he wrote for friends, he wrote for publishers he knew and what was happening right then. In Mozart's time, most music heard was new music. 
That's the context in which he thought of himself, and Haydn too.</p><p><strong>[00:02:35] Dan: </strong>Actually, another point on Mozart: in that biography you have a quote which says, "More than any other composer of his level, Mozart viewed human life and behavior almost as a novelist would, but in his case, his insight finally emerged as music, not just in opera, but in all music, founded on a fascination with the world and the people in it." Can you go a little bit deeper into that? What does it mean for Mozart to view human life as a novelist would?</p><p><strong>[00:03:00] Jan: </strong>I first noticed it when he was writing letters home in his travels. When he broke away from dad and was traveling on his own, he would go to a party and then write his father descriptions of the people at the party. Some of them really quite marvelous. They're very physical, or about how people talk or how people move. You just see somebody-- The way he writes about them is almost Dickensian sometimes. He talked about one composer at a party who, every time he was going to walk across the floor, would lean back on one foot, hoist his belly up with his hand, and then advance forward. It reminded me of Dickens' descriptions of people.</p><p>He had a very wide circle of friends, everything from the aristocracy to tradespeople and musical amateurs and so forth, and he was fascinated by them all, and I think he identified with them all. This was not true of Beethoven. I think one of the reasons Beethoven came to opera obliquely and a bit late is that the great model of what you should do with opera was Mozart at that time, but he knew, whether consciously or not, that he couldn't do what Mozart did, because he didn't understand other people as well as Mozart did, and he could not write comedy like Mozart did.</p><p>Beethoven had to find a higher, more ethical theme, and he finally found that in the operas of Cherubini, which became his model rather than Mozart. 
Mozart was just a guy who was totally involved with life and people. I call him Mr. Joie de Vivre. Beethoven was not Mr. Joie de Vivre at all.</p><p><strong>[00:04:44] Dan: </strong>Yes. Let's compare these two a little bit more. I'm wondering, if Mozart had lived to be 70, let's say, so that he and Beethoven were actually contemporaries, would Mozart have followed Beethoven into the Romantic era? The second part of this would be, would Beethoven have entered his heroic phase if Mozart was still alive? What do you think would have happened?</p><p><strong>[00:05:03] Jan: </strong>I think it's a very interesting question, and it's actually the kind of thing I think about a lot. I think about the fact that if George Gershwin had lived into his 70s, he would have died the year the Beatles broke up. What would that have done to him? If Beethoven and Mozart had been together in Vienna, I think it would have been in some ways incredibly fertile for them both, and at the same time, it would have been harder on Beethoven, because Beethoven really didn't have any rivals.</p><p>There were people who didn't like his stuff, there are always people who don't like your stuff, but he really didn't have any rivals, because Haydn was still working when Beethoven showed up but was really getting near the end of his career as a composer. The person that critics used to beat Beethoven over the head with was Mozart, who had died a few years before. If Mozart had still been around and doing what he did in his way, I think it would have been much harder for Beethoven to make his way.</p><p>I also think that Mozart would have understood what kind of a talent Beethoven was, and there would have been a great deal of cross-fertilization. Beethoven would have had more resistance, is what it amounts to. 
I think Mozart would have been probably interested, but how long he would have been-- If Beethoven had written <em>The Eroica</em> when Mozart was still alive, would Mozart have been able to countenance that? I don't know. I don't think Haydn was particularly fond of that piece. I think somebody said to Haydn when they heard <em>The Eroica</em>, "Well, we wouldn't do it like that, would we, master?" He said, "No, we wouldn't." That's Haydn, but Haydn was a lot older than Mozart, of course.</p><p><strong>[00:06:40] Dan: </strong>Yes. That's the thing: you've got to wonder if, since Mozart was younger, the school would have been more influential on him, whereas-</p><p><strong>[00:06:46] Jan: </strong>I think that's true.</p><p><strong>[00:06:47] Dan: </strong>-Haydn would have known he didn't have the life in front of him to really dive in, and so there's an incentive to poo-poo what the young are doing to defend your own.</p><p><strong>[00:06:56] Jan: </strong>I think there are traces of Beethoven's influence on late Haydn here and there. His late E-flat <em>Piano Sonata</em> is an absolutely wild piece. I always think of that piece as him showing these kids that he could do it too. He was in London, where he had much bigger, more robust pianos at that time. He really got into the piano in a new way. Meanwhile, he had this kid, Beethoven, who was writing very idiomatic piano music and very exploratory piano music. I don't know. Haydn may have been a little bit--</p><p>When I grew up, what they taught you in music class was that Beethoven created the modern idea of the symphony. It's not true. Haydn did, in his late symphonies. Beethoven simply picked up where Haydn left off and took it to another level, but nothing he ever did contradicted what Haydn was doing. That's why I call Beethoven not a revolutionary but a radical evolutionary. Revolutionaries want to overthrow the past and the present. Beethoven had no intention of doing that. 
Everything he did was founded on the past, but he took it very much in a new direction with tremendous intensity, emotionalism, individuality, and things like that.</p><p><strong>[00:08:13] Dan: </strong>Yes. You have on your personal blog a post that talks about this idea that genius is distinct from talent, sort of making the claim that, to some extent, you're either born with it or you're not.</p><p><strong>[00:08:24] Jan: </strong>Oh, that's not exactly what I said.</p><p><strong>[00:08:27] Dan: </strong>Okay, yes.</p><p><strong>[00:08:28] Jan: </strong>I think to be a genius, you have to have enormous inborn talent. Talent, to a large extent, is inborn, though everybody is different, but that's not enough. If you have great inborn talent for music or painting and you never take up music or painting, you're not going to be a great painter or composer. I think there's also a great deal of work and luck involved. I don't think you can be a genius without enormous inborn talent. I think that's a delusion that a lot of people have these days, the academic left: "There's really no such thing as talent." That's complete nonsense. I've taught in conservatory, and you see talent all over the place.</p><p>Then you have to cultivate it, you have to have good teachers, you have to be lucky in a way. You have to find encouragement and a milieu where you can develop the talent. Eventually, I think genius is an ability to not only surprise other people but surprise yourself. It is very much intuitive, but you still have to try. I think 90% of art is intuitive. You work in a kind of trance. They're wild horses, and you have to keep them under rein, and a lot of that is talent, judgment, taste, and experience.</p><p><strong>[00:09:52] Dan: </strong>Okay, I love this topic. I've got several questions on this. 
If we take Mozart and Beethoven, right in their wake, you have Mendelssohn, Schumann, and Brahms.</p><p><strong>[00:10:03] Jan: </strong>Let me point out, to begin with, that two composers, one of them Mendelssohn and the other Schubert, wrote more original and lasting music in their teens than Mozart did.</p><p><strong>[00:10:13] Dan: </strong>Yes, yes, yes. That leads to a question-- Schubert is a special case because he died so young and right at the peak of his powers, really. I really wonder what he would have done with even just another five years, but what I'm wondering is, for these others, and let's just take Mendelssohn and Brahms, could they have reached Mozart or Beethoven's level under the right circumstances, or did Mozart and Beethoven have some inborn talent that the others did not have access to?</p><p><strong>[00:10:41] Jan: </strong>I think all those composers had enormous talent. They had the talent in some way. Mendelssohn didn't get better. That's his particular thing. I think his late music is not radically better than what he was writing as a teenager. Many people feel Mendelssohn's best piece is the <em>String Sextet</em>, or is it <em>Octet</em>? I think <em>Octet</em>-</p><p><strong>[00:11:02] Dan: </strong><em>The</em> <em>Octet</em>.</p><p><strong>[00:11:02] Jan: </strong>-that he wrote when he was 16. He wrote the overture to <em>Midsummer Night's Dream</em> when he was in his teens. He never did anything better or more original than that. Whereas Beethoven, again, every genius is different, Beethoven was somebody who-- When I first started the <em>Beethoven,</em> I said he didn't write anywhere near the volume and the quality of music that Mozart did in his teens. Maybe he didn't quite have that same level of talent, but I realize that's probably not true. He probably did.</p><p>Beethoven concentrated on piano in his teens. 
He wanted very much to be a composer-pianist, and he didn't really compose all that much compared to Mozart, certainly. Mozart was writing operas from age 12. Nor did Beethoven reach the kind of maturity as a composer in his teens that Mozart did. It's probably just because he didn't do it as much. As soon as Beethoven got to Vienna and studied with Haydn, you see this boom, something happens. You look at what he was writing at 19. It was very promising. It was very interesting, and he clearly had a huge talent. Haydn heard this. He said this kid is really something as a pianist too, but he didn't have a sense of proportion.</p><p>His timing was off. Then he got to Vienna and studied with Haydn and suddenly had an incredible sense of timing. Just how long do I do this, and how do I put pieces together? Beethoven had this capacity to make enormous jumps in a very short amount of time. The year or so he spent studying with Haydn, I think, was part of that jump into what became the beginning of his maturity.</p><p><strong>[00:12:38] Dan: </strong>How important was his late period to his greatness? Because he's known as the Romantic God or whatever, really for the heroic phase, when he's doing <em>Eroica</em>, the <em>Fifth Symphony</em>, and everything, but it's like the late string quartets and the <em>Ninth</em> and the late sonatas are really where he was clearly achieving something that was-- No one taught him to do that. He was really on unbroken ground. Would he be who he was without the late pieces and just the heroic phase?</p><p><strong>[00:13:05] Jan: </strong>For us, he wouldn't. The question I'm thinking about is how much did the 19th century and the Romantics relate to his late music as opposed to his middle-period music. Remember that of the pieces of his middle period, we call it the heroic period sometimes, less than half, probably less than a third, are really in the heroic vein. 
Things like the <em>Fifth Symphony</em> and the Razumovsky string quartets in the middle, the <em>Sixth Symphony</em>, <em>Seventh Symphony</em>, these influenced the Romantics tremendously.</p><p>Remember, the <em>Ninth Symphony</em> wasn't played that much until the later 19th century, partly because it was so hard. In a way, the <em>Ninth Symphony</em> is so big and complex that you need a modern conductor to handle it and put it together. That kind of conductor didn't exist when he wrote the piece and for decades after. This idea of the virtuoso conductor is a product of the later 19th century, and only then did the <em>Ninth</em> really get pulled together and find its audience.</p><p>It was played. Brahms heard the <em>Ninth</em> very early and he was tremendously influenced by it. It's clear that Mendelssohn heard the late string quartets, Beethoven's, and was influenced by them. Schumann, I don't know. Rimsky-Korsakov in the late 19th century said he thought the late quartets were just a disaster, and the poor guy was half crazy and deaf and he couldn't really do it anymore. I think that was the opinion of a lot of people in the 19th century. I can't say to what extent the late music influenced the 19th century as compared to the middle. I can't really say that entirely. Brahms did.</p><p>The first piano concerto, the D minor, you cannot imagine that if you hadn't heard the <em>Ninth Symphony</em> in D minor.</p><p><strong>[00:14:54] Dan: </strong>Yes.</p><p><strong>[00:14:56] Jan: </strong>Mendelssohn, I think I see late Beethoven quartets in Mendelssohn's quartets at some point, but maybe not those later ones. I'm not entirely up on all that.
It's a very interesting question, though.</p><p><strong>[00:15:10] Dan: </strong>You talked about the role of the virtuoso conductor; I'm actually curious about the role of virtuoso instrumentalists. For example, Mozart, when he met Joseph Leutgeb, the horn player, it pushed the limits of what he thought you could do in a horn concerto. To what extent, as a composer yourself, do you feel constrained by what performers are actually able to play?</p><p><strong>[00:15:32] Jan: </strong>Oh, absolutely. It's not a matter of constraint. It's a matter of knowing, knowing what instruments can do and what they can't do. You can push that, but you can only push it so far. This may or may not be relevant, but there's a story-- I studied with Jacob Druckman at Yale. I was his assistant in the Electronic Music Studio. He said when he first started doing electronic music, one of the things that most interested him was that he could create rhythms that were more complex with electronic music than humans could play.</p><p>He did that. He said he discovered it didn't sound like anything, that it just sounded like rocks hitting a tin roof. There was no life in it. He realized that what he wanted to do was write difficult rhythms, but he wanted the effect of people struggling to play those rhythms because that's what brought it alive. I've heard a certain amount of electronic music that is just very virtuosically facile and it doesn't add up to anything. I've done it myself when I was first doing electronic music.</p><p><strong>[00:16:37] Dan: </strong>Yes, yes, yes. Going back to this idea of talent, I want to stick on it for a second, how generalizable do you think the great composers' talents were?
Let's say Leopold, Mozart's dad, had put him up to literature or painting, do you think that Mozart would have been in the very top tier in one of those domains as well?</p><p><strong>[00:16:54] Jan: </strong>Leopold was an incredible-- He's the only teacher his children ever had in music and everything else. If Mozart had had the same kind of talent he did and didn't have that father, well, an example of that is Beethoven. One of the reasons Beethoven didn't mature as fast as Mozart is he didn't have a Leopold Mozart. He had his father, Johann, who was an alcoholic and a mediocre musician. Johann tried to bill his kid as the next Mozart, including lying about his age, but it didn't really work even though Beethoven was a hell of a prodigy.</p><p>When Neefe, who got to Bonn and met Beethoven at age 10, became his teacher and started teaching him, he wrote an article saying this kid is going to be the next Mozart, but Beethoven didn't develop as a composer in his teens as much as Mozart by half. Nowhere close. That's partly because he didn't have a Leopold. Neefe was a good teacher but not a great one, probably. Beethoven really is as much self-taught as anything, both as a performer and a composer.</p><p>Whereas Mozart had his father. Leopold was a skilled composer, by the way, and reasonably well known. He quit composing when he saw-- His job, Leopold's job, became his son at a certain point. He quit composing. I think he just said, why bother? He was a pretty good composer and pretty well known, and his hand appears on Mozart's manuscripts into Mozart's 20s. He would still make suggestions and even corrections.</p><p>When Mozart was writing <em>Idomeneo</em>, he was in Munich and he was writing letters back and forth to Salzburg with dad, and Leopold was making a lot of suggestions and Mozart took some of them. His father, at one point, said, I hear this scene in my head, it's got trombones and it's uh-uh. Mozart did exactly what his father suggested.
This is Mozart's first great opera, but he was still listening to dad when dad had good suggestions. When dad tried to run his life, that's when Mozart was beginning to break away.</p><p><strong>[00:19:00] Dan: </strong>Yes, yes, yes. How should we think about Beethoven and the relationship with his dad? On the one hand, it's a tragic story when you think about it. He suffered abuse as a child from his father. On the other hand, his work was a direct result of, or direct consequence of, the abuse that he suffered in some ways. It made him who he was. My question is, can we have it both ways? Should we be grateful to his father?</p><p><strong>[00:19:26] Jan: </strong>His family gets a bad rap in some ways. He wasn't a helpless alcoholic until his wife died. After his wife died is when you start hearing the stories of him rolling in the gutter and that kind of thing. He was an alcoholic but he was functional. Again, we're talking about Beethoven's father. He was a respected voice teacher in town. He did not have a good voice but he was competent. He sang in the choir. He was probably a decent voice teacher. He did a lot to show his kid around. He had him play all over the area. He gave concerts in the house, where they'd open the windows and people would gather outside and listen, and he found his son teachers.</p><p>He started him off brutally, would beat him up and lock him in the cellar and things like that, which was probably the way he had been taught by his father, but his later teachers were not like that. Beethoven had a very good teacher from age 10 on, which was Neefe. He didn't study composition or piano that much with Neefe, but still, Neefe was a good and not a mean teacher at all.
I think partly you could say that Beethoven, when he was a kid, advanced in music at first because dad made him, and then it became a way to get away from dad and get out from under dad.</p><p>Beethoven and Mozart both had dad problems, but very, very different dads and different problems with them. Beethoven was making the money for the family by his mid-teens. He would give an allowance to dad, who was pretty drunk by then.</p><p><strong>[00:21:03] Dan: </strong>Yes, yes, yes. How do you think this turns out for Mozart? He wasn't really the suffering genius, but his music has this joyful happiness to it, whereas of course, Beethoven is very emotional, very fraught. There's a lot going on inside that head. Do you think it is a direct result of their life or is it more the context under which-- Beethoven was also a big fan of the Enlightenment and Napoleon at one point in time. Which of these do you think had a bigger impact: cultural influences, or the direct result of their daily life?</p><p><strong>[00:21:36] Jan: </strong>A certain amount of it is just personality. Mozart had a very <em>joie de vivre</em> personality. He was a showbiz kid. He was one of the most famous people in the world from about age six. That was not Beethoven. Beethoven was not Mr. Joie de Vivre because he was sick all the time. He'd been sick starting as a teenager. He had depressions, his gut was a mess his whole life. Meanwhile, he was just a more emotional and intense person. As an artist, Mozart to me is the charmer and the seducer. Beethoven is a guy grabbing your lapels and saying, within two inches of your face, "I'm telling you something very important and you must listen to this."</p><p>That's a different artistic personality, which partly came out of their individual personalities. Meanwhile, Beethoven was not, I don't think, in his sensibility-- He was an Enlightenment person very much. He was a product of the 1780s, which were the revolutionary decade in Europe.
It was the decade of the American Revolution and finally the French Revolution. There was this incredible sense of new things in the air and human potentials, with science and constitutional governments that were new in the world. Beethoven was absolutely part of that, but he was not a romantic in his sensibility. He was an 18th-century person. I think it's very important: Beethoven's audience in his maturity were romantics.</p><p><strong>[00:23:10] Dan: </strong>Oh, yes, yes.</p><p><strong>[00:23:12] Jan: </strong>That leads to one of my old lines, something I discovered when I was writing the Ives book. There are three important things when you're an artist and before the public, anyway. One is what you think you are doing, what you consider yourself to be doing. Two is what your audience considers you to be doing, and those can be quite different and even contradictory things. Three is your response to what your audience thinks you're doing, because your audience may influence you.</p><p>I think Beethoven's audience, who were romantics, did influence him and did have something to do with the late music. For example, one of the critics-- He read everything written about himself, Beethoven did. He certainly read E.T.A. Hoffmann, who was an arch-romantic and wrote fantasy stories that are still known, which are really quite fantastic. He was also a composer and music critic, Hoffmann. Beethoven read Hoffmann's things, which are now considered to be the foundation of romantic criticism of Beethoven. He's writing things like: Beethoven presses the levers of fear, the uncanny, the bizarre, all these qualities that romantics loved. When Beethoven was young, his music was condemned for being bizarre. When the romantics got ahold of it, they praised it for being bizarre. I think that came back. Meanwhile, two things, he thanked Hoffmann. He sent a note to Hoffmann. He said, "Thank you. I've really appreciated you writing about me."
There is a letter he wrote to a young friend of his that I found sounded very peculiar. I suddenly realized it is in the vein of Hoffmann's stories, which he had read by then.</p><p>There is a romantic, and Beethoven is absolutely aware of what he's saying about his music. I think it gave him a new angle on his own work and new possibilities. I won't go so far as to say exactly how, but it may have stimulated and emboldened him toward the late music. Being deaf did, too. It was in his head. He couldn't hear music outside anymore. It was all in his head. That takes on a very particular quality. Well, I won't say particular. It takes on a quality. I won't be too precise about it because it's not precise.</p><p><strong>[00:25:26] Dan: </strong>That fact is just so mythical to me. It's almost like it's scripted or something. It's so crazy.</p><p><strong>[00:25:35] Jan: </strong>It's also amazing that he completely changed his orchestral sound when he was deaf, and his string quartet sound when he was deaf. To me as a composer, that's beyond belief. Some things in the <em>Ninth</em> didn't work. He made some miscalculations, but not that much. Mostly he just created this wholly new and fantastic, huge, big orchestral sound. The string quartets, the late quartets, some of the pages look like Schoenberg on the page. They're so wild in terms of texture and color. He's combining four different textures sometimes.</p><p><strong>[00:26:08] Dan: </strong>It's so crazy, almost as to be divine at times. As a teacher, let's say the next Beethoven was in your class at the conservatory, do you think you'd actually be able to spot them? What characteristics or signs would you look for where you would say, "Holy cow, this student is something special"?</p><p><strong>[00:26:27] Jan: </strong>That's a very good question. Basically, you're teaching at a conservatory, you get these people who are thrown into this situation of just a boiling cauldron of music all the time.
They all grow and they all change, but you see some changing like this and you see others taking off like a rocket. Those are the ones, of course, who have talent and imagination.</p><p>Almost all those who take off like a rocket, at a certain point, plateau out and stay there pretty much. The ones who keep going and going and going, those are the ones who had the potential for genius. Have I ever had a student that I felt had that? No. Do I have that? No. Even though I've certainly grown and changed, and I think I'm pretty good as a composer, and I think I just wrote my best piece at 77, that's not getting on a plateau, and the same is true of a performer.</p><p>With most performers, when you first start getting serious about an instrument, you get better very, very quickly. To get from nowhere to pretty good takes X amount of time. To get from pretty good to very good takes X squared or more. To get to be really fantastic and to make these tiny, tiny gains but keep going and keep going, it takes an enormous amount of effort, time, talent, and discipline.</p><p><strong>[00:28:08] Dan: </strong>We will come back to your best piece being written at 77 because I want to dig into that, but one follow-up on this point here. Let's say you were convinced without a shadow of a doubt, you're saying, "This student has Beethoven-level potential," but of course, the modern world is very different from where these composers grew up. In the past, you could take the institutions and structures for granted. Just the whole realm of classical music is much different. Knowing what you know about how classical music works today, how would you guide them to make sure that they reach their full potential and really go down in history as one of the greats?</p><p><strong>[00:28:38] Jan: </strong>I wouldn't know that. I would know somebody had tremendous potential.
If I watched them through the first 10 or 15 years of their career, I might say, "Geez, this person is--" The mechanisms and the kind of milieu that Beethoven had in his time don't exist anymore. There was a real system for recognizing and fostering talent, a lot of it by the aristocracy and the church, and they had the money to do something about it.</p><p>Beethoven didn't have to go and spend a lot of time yelling, "Look at me, look at me, look at me," because when he got to Vienna, there were immediately people who knew a phenomenal keyboard player when they heard one and knew a great composer, or a budding composer, when they heard one, and they would do a lot of the promoting themselves. These days, as a composer, you try to get a teaching job because that's the only way to make a living unless you have a lot of money. Then you have to be an entrepreneur. You have to sell yourself.</p><p>In a world where you can't make a living at it most likely, and where there is no milieu to foster composers and bring them along, you have to do it yourself. You have to be a creator and an entrepreneur at the same time. This is something I'm terrible at. I'm a terrible careerist in every respect. I would tell students, and I have told students, "You have to be a careerist. You have to--" One way a lot of composers do it is form groups of composers and give concerts. Those groups are popping up and disappearing all the time, but some stick around. That's one way to get noticed.</p><p>I tell people to make friends with musicians and write music for them that will make them sound good and that they'll like. If you have a really good musician on your side, especially a prominent one, then that's going to make a huge difference.
Those are the mechanisms, or you can do what one composer I know does, which is to send you his music and then call you about every three weeks to hound you about it.</p><p>I say that because for the guy I'm thinking about, it worked, even though I don't think much of his music. Somebody was telling me about this. He said he sent him his piano concerto and he would get a call like clockwork every six weeks. Every time he'd have a new idea about how to do it. "Oh, I can hire some ringers, or we can do this, that, and the other," the guy kept saying. He conducted a college orchestra that could begin to play this piece. This went on for a couple or three years. That guy eventually had a certain amount of success because he was just absolutely relentless in promoting himself. Certainly, somebody like Philip Glass, that's exactly what he did. He is a brilliant self-promoter.</p><p><strong>[00:31:24] Dan: </strong>Yes, yes, yes. Actually, just generally, what do you think of that, the role of the aristocratic patron and the church in 18th- to 19th-century art more broadly? I feel like some people don't like it because it made composers subservient to this ruling aristocracy. On the other hand, what you're describing now doesn't necessarily sound like it's fostering more innovation or necessarily better for artists. Was there actually something quite right with the aristocratic method of going out and funding young artists?</p><p><strong>[00:31:55] Jan: </strong>They had money and in some cases they had taste. Vienna was interesting because the aristocracy was pretty superficial. The aristocracy all over Europe were often quite superficial, though some of them were very smart. Basically, they often competed with one another. Music was status for them. Having a stable of musicians and composers you were fostering, this was status in the aristocratic world. Meanwhile, they had a whole lot of money. It was changing, though, in the 19th century.
By the middle of the century, the main way you made a living as a composer was through publishers.</p><p>Music publishing was taking off in Beethoven's lifetime. He was the first composer to be published. All of his music pretty much was published from the beginning. It's partly because there were innovations in engraving and things like that that made it much faster and cheaper to put music out. Also, there was a growing audience for music, a middle-class audience, and Beethoven's music contributed to that. Remember, Beethoven was not played in public that much. Chamber music was never played in public. Piano sonatas were never played in public in Beethoven's lifetime.</p><p>The string quartet that he was associated with, run by a guy named Schuppanzigh, was the first quartet in Europe to do a public subscription series, and it didn't last that long. They mostly worked for aristocrats. There were public concerts of orchestra music in Vienna, but there was no standing orchestra at all in Beethoven's lifetime. If you wanted to put on a concert, you had to scrape together the orchestra yourself and pay them. There were some concerts in the park and things like that in the summer. Our notion of a concert life didn't exist. It was mostly private music enthusiasts.</p><p>The first performances of <em>The Eroica</em> were in a room about the size of a large banquet hall, for audiences of 20, 30, 40 people. As the century went on, partly because there had been this incredible run of genius, Bach, Handel, Haydn, Mozart, Beethoven, and then Schumann and Schubert, music just boomed in the 19th century. It became profitable to publish it and publishing was much cheaper and easier, so the sales of cheap music took over as a way for composers to make a living.</p><p><strong>[00:34:12] Dan: </strong>Question on those really early legends. I know Germany as a sovereign state is a new concept, but broadly speaking, you could classify Bach, Mozart, and Beethoven as of German descent.
Could they have been French or Italian? Why is it that these three all happen to be German?</p><p><strong>[00:34:28] Jan: </strong>They could have been. They would have been different composers if they were.</p><p><strong>[00:34:32] Dan: </strong>Would they be who they are today, though?</p><p><strong>[00:34:33] Jan: </strong>They might have been great composers. Again, a certain amount is luck. They would have had to be lucky too, in a different way, in a different place. As I say in my book about Beethoven, he grew up in Bonn, which was one of the most intensely enlightened and progressive states of all the many states in Germany. If he'd grown up somewhere else other than Bonn, he might have been a very great composer, but he wouldn't have been the same. He wouldn't have been inculcated the same way with Enlightenment ideals, to which he was--</p><p>Beethoven was told, "You have great talent. It is your duty to use that talent to benefit humanity. That is your task." He stuck to that for the rest of his life. That's an Enlightenment idea. All these people, if they'd been French, again, might have been, but they would have written French music because there wouldn't have been any alternative. You don't grow up in France and write music that sounds like German music, by and large. That's what Berlioz was accused of, and he was pretty Germanic for a French composer, but he was still basically French.</p><p><strong>[00:35:36] Dan: </strong>I guess what I'm wondering is if there's something about Germany at that time that was fertile ground for young talent to come up and get into music.</p><p><strong>[00:35:46] Jan: </strong>It was just a great place to be a musician, that's all. Think, Beethoven grew up when there was a court musical establishment, an orchestra, an opera, a theater. He played in the opera orchestra. He accompanied opera rehearsals. They played all the latest stuff, so he heard all the new Mozarts within a year of their being premiered, probably.
He heard everything. He played in the orchestra, and he played at court. He got paid for this. You couldn't make an international reputation in Bonn. You had to leave for that, and he did, but it was a fantastic place to grow up as a musician. There were other places like that, in Vienna and Berlin. There were places all over Germany.</p><p>Mannheim was another place like that. Mannheim had probably the greatest orchestra in the world for a while when Mozart was young. It fostered composers. None of them were in Beethoven's league or Mozart's league. There were just these fertile musical places where you had all kinds of experience. Beethoven was also a church organist. He was assistant to Neefe as a church organist. He was playing piano, he was playing organ, he was playing violin and viola in the orchestra. He was listening to new operas. He was playing in court as a soloist. He was traveling as a soloist with the court orchestra. This is an incredible way to grow up as a musician.</p><p><strong>[00:37:10] Dan: </strong>Yes. Just by the way, the reason I'm so interested in these types of questions is it seems obvious that if you have this cluster of really talented people, and they're all from broadly the same area, there must be something causal about the culture and what's in the water or whatever. The question of how you get more Beethovens or Bachs today surely has something to do with figuring out the culture and the environment that people are able to grow up in.</p><p><strong>[00:37:35] Jan: </strong>The culture. Italy had a fantastic musical culture, but it had largely to do with opera. If you grew up in Italy, that's what you did. Mozart's dad, Leopold, said, "Okay, we're going to gear you up to write opera, so we're going to go to Italy because that's where you learn to write opera." It has to do with Middle Europe. Let me tell a story that I think is great in terms of the idea of a Middle Europe.
When Rossini was young, there was a system of going from city to city and writing operas.</p><p>When you were a young composer trying to get known for writing opera, you'd breeze into town and you'd meet the singers. You would, on the spot, write an opera for these singers, with piano accompaniment. It would either be a hit or it was not. If it was a hit, you would have a lot of girlfriends, a lot of wining and dining. You would be lauded. Then you'd go on to the next town and see if you could do it again. What a fantastic way to become an opera composer. Though, of the people who went through that system, there was only one, Rossini. I don't know if anybody else made a great-- That had to do with his particular talent, gifts, and luck.</p><p><strong>[00:38:45] Dan: </strong>Some questions, these are all going to start to point towards the modern era and where music has evolved to. Why do you think it is that different art forms evolve together? The 20th century, you've got Picasso, serialism. They're all making the same point. Then in literature as well, you've noted before that Faulkner was influenced by Einstein's relativity and Freud. It seems like there's this macro zeitgeist that catches everybody. Why is that? Why does all art seem to move together?</p><p><strong>[00:39:15] Jan: </strong>In the first place, you used the word zeitgeist, and I believe zeitgeist is a real thing. It's a temperament. As you said, I think Faulkner was influenced by Freud, but I don't know if he ever read Freud. I doubt it. It's possible, but I doubt it. These ideas were just in the air. If you were in the arts, around the arts, you picked up these ideas by osmosis. Every period has its zeitgeist. Somebody like Beethoven was so powerful that he changed the zeitgeist. First, he became part of it. He grew up in the Germanic tradition of mainly Haydn, Mozart, Bach, and Handel.</p><p>Handel was the only composer Beethoven considered superior, by the way.
He considered others as equals, but not as superiors. The Romantic zeitgeist was something he, I think, gradually absorbed. Meanwhile, what he absorbed was the 1780s, the zeitgeist of the Enlightenment and the revolutionary decade. I don't think that any artist completely escapes the zeitgeist, though everyone responds to it in their own way.</p><p>Somebody like Charles Ives, who was writing music so strange in terms of his time, and who was so independent because he had to be, was still very influenced by the ideas around him, and very much by the idea of evolution, which was the big, fat, intellectual, scientific idea of the era, but it was applied both rightly and wrongly to sociology as well. So much of Ives had to do with the concept of evolution, whether you put it that way or not, evolution in the arts and in humanity, in the human spirit.</p><p>I don't know what Jung would say about all this in terms of the zeitgeist and the world spirit and things like that. I just think you're influenced powerfully by the things around you and you can't help it, because that's what's around you, and you begin by imitating. That's how you begin as an artist, by imitating, by and large. Mozart was a phenomenal mimic. He could breeze into a town, listen to the local composers, and start writing in their styles right away, but that's something he had to get beyond, just like he had to get beyond his fame and his being a prodigy, because when he grew up, he had to show up and say, "Hey, I'm not a kid anymore."</p><p>There's no alternative. As an artist, you don't come out of nowhere. You come out of your time and its zeitgeist. You may be independent in many, many ways, and I think I'm a very independent composer.
I don't write like anybody else, but I did for a while, and then I gradually moved beyond it.</p><p><strong>[00:41:55] Dan: </strong>Here's a question for you as a composer. I had this frame more generally, but I'm curious about your own experience with it. We just agreed here, it does feel like there is a zeitgeist and everybody is wrapped up in it. It's really hard to ignore, but as a composer, you can think about picking and choosing your favorites for influences, right? What I'm curious about is that it doesn't seem like there are that many people who are able to ignore certain movements before them. For example, after Beethoven, nobody really big in the Romantic era, at least as far as I'm aware, completely ignored him. You had to grapple with the fact that Beethoven was there. Then--</p><p><strong>[00:42:30] Jan: </strong>Like it or not, yes, you had to deal with it.</p><p><strong>[00:42:33] Dan: </strong>You had to deal with it. It seems like the same thing for Schoenberg nowadays. I don't know. Maybe some people do ignore him, but it seems hard, at least, for people to just move on. No one is just going back and saying, "I'm just going to write in the style of Mozart and Haydn." Why is that?</p><p><strong>[00:42:50] Jan: </strong>Well, because you've had sounds in your ear that are not Mozart and Haydn. Unless you're completely dishonest, you can't pretend that you haven't. Though, you can certainly write in retrogressive ways. You can write regular chords and tonal harmony, like what's called the Atlanta School today. People like Jennifer Higdon are very much writing in traditional veins and having a lot of success at it, but you can't pretend that you aren't who you are, and who you are is to a degree your experiences. Whether you like Schoenberg, for example, or not, you've heard him.</p><p>One thing, though, about the zeitgeist: the zeitgeist when I was coming up in graduate school was dominated by serialism.
There was a phase when students were liable to be told, if you're serious, you write serial music. If you're not a serialist, you're not serious. That was ridiculous. It was never anything but ridiculous. When I was in graduate school, I saw all these, what I call, petit revolutionaries running around trying to revolutionize everything. I said, "This is nuts. This is crazy. Most of this is just nonsense." There aren't that many genuinely new ideas in the world. They don't happen that often, and when everybody's trying to create them, it's chaos.</p><p>I would say that the zeitgeist now in the arts is chaos. That's what I think it is. It's anarchy and chaos. To find a grounding for yourself as an artist in any medium, and that I think is unique to this period, because you're grounding yourself in a period of chaos, it's like you're trying to find firm ground in a flood. I think my line about what you have to do as a composer is that there's the-- Lying around you is the rubble of all past systems of music. There's tonality, late tonality, the Renaissance, the Baroque, twelve-tone, serialism, minimalism, primitivism, and futurism.</p><p>All this stuff is just lying around. What you have to do as a composer is pick up bits and pieces of these ideas, techniques, philosophies, and aesthetics and from that cobble together your own thing. When I was at Boston Conservatory, we had a big composition department. I think when I left, there were 30-some, maybe even 40, students and maybe 5 faculty. All those people wrote in 45 different ways, and none of them were straight serialists, and none of them were straight minimalists, yet all of us were influenced by serialism, minimalism, neoclassicism, and Cage. All those flavors and elements were circulating around among all these composers, but in a different way in every person.</p><p><strong>[00:45:46] Dan: </strong>Got it.
You've said before in another interview that twelve-tone music basically became an orthodoxy because Schoenberg explained it. Then you have other composers like Bart&#243;k who never really explained their work. The question to you is, should they be explaining their work?</p><p><strong>[00:46:03] Jan: </strong>Bart&#243;k refused to teach composition. He said, "It can't be done. You have to find your own way." That's one of the reasons he was practically starving when he got to America. He refused to teach composition. He only taught piano.</p><p><strong>[00:46:16] Dan: </strong>That's interesting.</p><p><strong>[00:46:17] Jan: </strong>What you just said, you may have picked this idea up from me at some point, because I really believe that's true. Why did so many people adopt Schoenberg's system? Because he explained it. If Bart&#243;k had explained his system, there would probably be a lot more people writing like Bart&#243;k. Hindemith explained what he was doing, so a lot of people wrote like Hindemith. That's just how it is. I have a very ambivalent attitude towards serialism. I like a lot of Schoenberg, but he's the only twelve-tone composer I like a fair amount of.</p><p>I tremendously love early Webern, what's called the free atonal Webern. When Webern got very serious about serialism, I don't like his music anymore. Sorry. To me, he started playing mahjong with notes, which is what I call most serialism. Schoenberg is passionate. Schoenberg is expressive, which he isn't given much credit for, but he is. He's often expressing despairing and dark things, but that's how it goes. With Pierrot Lunaire, it's just that he took up these very strange poems.</p><p>I'm going to tell you another great principle of creativity, which you're not taught in school. You're taught in music school that innovation is entirely a technical matter. 
You bend this, you break that, you extend this, you write new kinds of this, that, and the other, and that's what innovation is. But every one of the great musical innovations came from trying to express something outside music. <em>The</em> <em>Eroica</em>, the greatest revolutionary symphony ever written, was called <em>Napoleon.</em> It was called <em>Bonaparte.</em> It was a piece about Napoleon.</p><p>Wagner wrote music dramas. He evolved a revolutionary musical style to express his stories and his characters. When Schoenberg started to write Pierrot Lunaire, he had these very strange, decadent, wonderfully weird poems, and he evolved a very weird musical language to express them. And he did it in a traditional form, what's called a melodrama, which is speaking to music, speaking verse. Mozart used this idea. It's an old idea, but all Schoenberg did was say, "I want to have more control over the speaking than melodrama has had in the past, so I'm going to notate it precisely." That was the only difference. The language was there to express those strange poems.</p><p><em>Rite of Spring</em>, another one of the great innovative things, was a story about primitive Russia and so forth and so on. Every great musical advance has to do with trying to express something outside music that is maybe new in itself. The Ballets Russes was doing revolutionary kinds of dance, so it called for revolutionary music. Schoenberg was very much a part of the <em>fin de si&#232;cle</em>, the wonderful decay of romanticism into the <em>fin de si&#232;cle</em>. He was part of that, and he contributed to it.</p><p>When I was in graduate school, I said everybody in all the arts I knew was trying to be revolutionary, and I just thought it was stupid. A lot of people in graduate schools at that time were writing as if they were writing serial music in 1918 or 1922, and I also said that is idiotic. 
It pretends that music is just technique, that it has nothing to do with its time and place in the world, and that is stupid. It's not true. It's ridiculous to try to write as if you're a German. For American twenty-year-old graduate students to write music as if they're Germans in the <em>fin de si&#232;cle</em> [laughs] in 1919, it is foolish. That's not how art works. It's not how life works.</p><p><strong>[00:49:56] Dan: </strong>On this, for composers, you did mention that, at least lately, some of the strategies that people have used to get their work out there have been collaborating with other composers. All of the greats, for the most part, did this solo. Why is it that there aren't any--</p><p><strong>[00:50:09] Jan: </strong>No. It's not true. Schoenberg founded the Society for Private Musical Performances. There were private concerts where critics were not invited. It was all by invitation. He did have a group. He had his students, Webern and Berg, who were brilliant composers whom he fostered. They had their little group and they had some other people involved. You can say that's the first case, maybe, of a composer forming a group. Certainly, Brahms would never have done anything like that.</p><p><strong>[00:50:39] Dan: </strong>I guess the question just is, before Schoenberg and the 20th century, why were there no teams? Everyone did this solo. Why didn't we have a group together or a duo or something like this?</p><p><strong>[00:50:52] Jan: </strong>Well, there were certainly affinities. Schumann was associated with Mendelssohn and against Wagner and Liszt, basically. Wagner and Liszt, and to a degree Berlioz, who was older, were a bit of a team, but they didn't do concerts together. Your relationship was with publishers, really, and performers, and you made your career as an individual. 
In the 19th century, you tended to be affiliated either with the Brahmsian camp or the Wagner and Liszt camp.</p><p>You were either writing program music in the vein of Liszt, in the tradition that Liszt invented--he invented the tone poem--or you were sticking to what was called abstract or pure music in the vein of Brahms, who did not write on stories, or if he did, he never admitted it, except that he occasionally dropped little hints about its connection to his life, because his music was connected to his life. He just didn't like to talk about it nor hear about it.</p><p><strong>[00:51:51] Dan: </strong>If Beethoven had stayed alive, who do you think he would have sympathized with more, and who do you think more correctly carried his influence? Would he have woken up and said, "Wagner, you got it. You understood. You keep going." Or would he have said--</p><p><strong>[00:52:04] Jan: </strong>Wagner certainly painted himself as the heir of Beethoven, but he also said this: "The symphony is dead, and the future is basically me."</p><p><strong>[00:52:11] Dan: </strong>Yes. Would he agree, or would he say, "No, you fool. You're supposed to stay within the confines of the symphony and work within these forms"?</p><p><strong>[00:52:19] Jan: </strong>Or would Beethoven, if he'd lived another 20 years, have written more program music? Because the fact is, the program tradition was founded basically on Beethoven pieces like the <em>Pastoral</em> <em>Symphony</em>. Also, every one of Beethoven's overtures is very programmatic. You hear the story, you hear the characters. He didn't like to admit it-- Beethoven actually put down the whole idea of program music, especially when Haydn did it, and yet he did it anyway. He was very good at it, but he never quite admitted that that's what he was doing, except in cheesy commercial pieces like <em>Wellington's Victory</em>, about which Beethoven said, when somebody criticized it, "I know this is crap, but this crap is better than anything you ever did." 
[laughs]</p><p>Beethoven might have felt less self-conscious about program music. That's about the only thing I can think of. Would he have been influenced by Wagner's music dramas? He wouldn't have lived long enough to see those in their maturity anyway.</p><p><strong>[00:53:26] Dan: </strong>I just wonder if he would have approved of the concept, or if he would've said, "Brahms, you got it right. This is what I was pointing towards." I'm not sure.</p><p><strong>[00:53:35] Jan: </strong>I'm not sure either. I think he would have been-- He never admitted being influenced by anything, but he was anyway. Just like he never admitted writing program pieces, but he did anyway. I don't know. I can't say. I think he might have found more affinity in Brahms, because Brahms was so close to him. I've heard an eminent Brahmsian say the first two Brahms symphonies are Beethoven's 10th and 11th, and the third is the first real Brahms symphony--which isn't to say the first two aren't great pieces, but they are so overtly in Beethoven's line of thought.</p><p>As somebody said to Brahms about the first symphony, gee, that chorale theme in the last movement is very reminiscent of Beethoven. Brahms said, "Yes, any jackass can see that." What he meant was: yes, it's true, but I contributed something too. It doesn't really sound exactly like Beethoven. It sounds like me, you jackass.</p><p><strong>[00:54:27] Dan: </strong>I've heard you say that the very first show that got you into classical music was when you saw Bernstein conduct Brahms 1. Is that true? What influence did that have on you?</p><p><strong>[00:54:38] Jan: </strong>I had played that melody in an all-Tennessee junior high honors band a year before, and I just loved it, but I had no idea who wrote it. Meanwhile, I'd forgotten it. I was sad that I couldn't remember that wonderful melody anymore. Then suddenly in this concert, there it is, and I was just changed as a person. I always say that's when I became a musician. 
I was talking to Laith Al-Saadi about this once, and he said for him it was the middle part of the second movement of the third symphony, that huge fugue, which is one of the pinnacles of music, I think. That same thing had a lot to do with my being a musician too. I know Emanuel Ax, and he said for him it was Brahms' second piano concerto. There are these moments that just knock you over and change your life. You have to be ready for them. You have to be primed. When they happen, they happen.</p><p><strong>[00:55:41] Dan: </strong>Alex Ross, the New Yorker journalist, wrote about the classical music recordings of 2023, and he commented, "I can't remember a year of so many pleasure-inducing, addiction-triggering albums." Basically, what he's saying here is he thinks that the world of recording, anyway, is still really, really great. Do you agree? Do you think that the world is still generating great recordings?</p><p><strong>[00:56:01] Jan: </strong>Well, I think he was talking about the pieces. And one of the things about the current situation, in music anyway-- Visual arts are very different. You can throw a pail of dirt in the corner and get paid $1 million for it. That's the situation of the visual arts now, and I'm not really exaggerating that much. You can't do that as a composer, but here's what I think. I think in the '60s, minimalism appeared, and it did something. I don't like most minimalist pieces. My line is I don't like minimalism except for the pieces I like, which is mostly Steve Reich's <em>Music for 18 Musicians</em>. What minimalism did is break the power, the stranglehold, that serialism had in the academy, and it became so simplistic--often childishly simplistic, writing stuff that a four-year-old could understand--that it opened up everything in the middle.</p><p>I'm one of those people in the middle. What you have now are people like Jennifer Higdon writing very 19th-century music that really recalls 19th-century romanticism. 
You still have some serial composers, I'm sure, here and there, but you have other people writing in just an amazing variety of voices, and some of those are very appealing. Caroline Shaw, who won the Pulitzer a few years ago with this marvelous vocal piece-- I just remember feeling that it had elements from the last several hundred years in it, put together in a way that was entirely new and her own. That's the kind of thing-- a piece that's tremendously appealing without feeling like it's pandering in any way. That's the kind of thing I'm sure that Ross was talking about. There are other composers like that.</p><p>I have to admit I don't systematically keep up with new music as much as I probably should. A lot of what Ross is probably talking about, I don't even know.</p><p><strong>[00:57:56] Dan: </strong>Actually, a lot of what he was commenting on in this specific one-- yes, he does write a lot about new contemporary music, but it was specifically recordings. I think what he's getting at is there are still great pianists coming out with full cycles of the Beethoven piano sonatas. There are still great string quartet recordings coming out of some of the old repertoire. It does seem like, at least in his view, this is surprising because, for classical music as a whole, the narrative is revenues are down and people are less interested. At the same time, it seems like the 10th best pianist today is as good as or way better than the 10th best pianist at maybe any time ever in history. I'm just curious if you agree with that sentiment.</p><p><strong>[00:58:36] Jan: </strong>I can't say in terms of pianists. I really don't know. Not knowing what Ross is talking about, I'm not sure I can comment on that, except to say that there's this huge stew of stuff going on. It's a great time to be an anarchist. 
I call myself politically a right-wing socialist and creatively an anarchist classicist, because the classicist side of me wants things to be just so, and to try to write music that sounds like it wrote itself, which is what I think the classical period was trying to do. At the same time, I'm a bit of an anarchist, and I'm comfortable with that. In a way, it's a great time to be an anarchist because there is no one way to write music. There is no technique. You have to make your own technique.</p><p>You may draw on previous techniques, but just to draw blankly on serialism these days-- is anybody even still doing that? I don't know. There are many other things that are involved with that. One of the reasons that serialism triumphed after World War II was that it was an antidote to chaos; the composers felt that it was a rational system of composition that spiritually was an answer to the chaos of two world wars. After World War I, Schoenberg turned to twelve-tone, which was much more organized, and Stravinsky turned to the neoclassic stuff, which was much more constrained. You can't leave those things out either.</p><p>Now, I'm wandering again, but I think this is all relevant. I'm wandering away from what you said about Ross to another issue, which is that I think the deconstruction movement of 20 or 30 years ago had, to me, a disastrous influence on art, because it said the reader is superior to the writer, that any cockamamie thing you want to say is perfectly legitimate if you can manage to get tenure for it. I think that in an era when there are no longer any agreed-upon standards of what's right and wrong-- let's forget about right and wrong-- good and bad in the arts, it creates the situation we are in right now, which is that the general critical opinion you see everywhere is nothing but: what justifies art is politics. 
Saying the politically correct things is what makes you a worthwhile artist, because what else is there?</p><p>What is the wonderful Black poet, the young woman who read at the inauguration of-- She came out of Harvard. What is her name? The comments I've heard about her are that she's going to write about the Black experience and Black suffering, as if writing good poetry doesn't have anything to do with it. I'm of the school of what I think Joseph Brodsky said once-- I'm paraphrasing: "The artist's main responsibility to society is to write well." Beethoven wrote a symphony called <em>Bonaparte</em>, and that was very significant in a political sense, but if the notes weren't great, it wouldn't make any difference. If your words aren't good, if your notes aren't good, it doesn't matter, ultimately, in the long run, how correct your political sentiments are. It's not going to last.</p><p>The way that art is being judged now, critically and academically, has almost entirely to do with politics, and I don't like that. I've written pieces that have to do with contemporary politics and pieces that don't, and I think they're both perfectly valid. I'm not saying you shouldn't write about contemporary issues. I'm saying that that is not ultimately what makes a difference. What makes a difference is how good you are and how good your notes are.</p><p><strong>[01:02:23] Dan: </strong>You've been in the artistic creative community for a while now, and presumably have seen a bit of a shift in the zeitgeist over time, and have a feel for it. What is your prediction for the next 10 to 20 years? Do you think we'll get out of this rut, or is art just going to stay a political device?</p><p><strong>[01:02:40] Jan: </strong>If I'm right that we're in chaos right now, chaos isn't exactly a rut. It's an anti-rut. I have no predictions for 20 years out. I really don't know. I don't think anybody would have predicted minimalism until it happened, and minimalism really changed music. 
Even though I don't like most minimalist pieces, I think it did music a great service, because it opened up the whole range of musical possibilities as something you could do and be taken seriously for. Other than the continuation of the anarchy that we're living in now in the arts--and maybe in politics too, it seems more and more--I can't see-- Let's put it this way. I can't see a unified technique of composition coming along that will become standard. I don't think that will ever happen again.</p><p>I don't see a unified aesthetic coming along, which a great many artists adhere to. I don't think that'll ever happen again, though that is maybe more possible than a technical thing. Other than that, I can't predict. I read a thing once that's really influenced me a lot. It was a study that analyzed a great many prophecies, and it found there was no difference between prophecy and chance. Prophecy was random. It had nothing very solid to do with-- Then again, who would have predicted that Donald Trump would become president of the United States? That idea would have been so ridiculous, and yet here we are.</p><p><strong>[01:04:21] Dan: </strong>Even when I look back at, like, a year in my life, sometimes I'm shocked by how many things happened that you're like, "There's just no way you could have ever seen it coming." Your personal life, what's going on in the world, it doesn't matter. It's just so hard to get a grasp on it.</p><p><strong>[01:04:34] Jan: </strong>Somebody said once, "The future is old-fashioned. We've learned to try to predict it." The movie <em>2001</em> technologically could have very easily happened. I'm not talking about the slab. I'm just talking about the technology to get to Jupiter or whatever, or to have bases on the moon. We just didn't go that way. That's all.</p><p><strong>[01:04:59] Dan: </strong>A couple of questions on your compositions. 
<em>They That Mourn</em>--I was listening to it on YouTube this morning, and I highly recommend it. I'll link to it in the show notes. But I'm just curious, what were your biggest influences on this piece?</p><p><strong>[01:05:10] Jan: </strong>Great question. That's what I've said is my best piece for quite a while, and certainly my best performance, but I've got one coming out on piano with Adam Golka that is going to be in that league. I haven't heard it yet. <em>Mourn</em> is in the vein of music I'd been writing for a decade or two before that, so there was that. It was a commissioned piece. I just started it as a piece, and I was working on the beginning when 9/11 happened. I said, "Don't let that get in the music. It's too big. It's too early. Don't do that," but it just took over the piece. I couldn't help it. In fact, the beginning of the piece was absolutely appropriate to it.</p><p>The biggest influence was an immediate response to 9/11. Musically, I'm sure there are some problems in it. I think some people would call that a neo-romantic piece. I don't admit to that. [laughs] I've been called neo-romantic, but to me, I'm no more that than neoclassic or neo-Balinese or neo-Indian--there are a lot of influences in my music--or neo-blues, for that matter. I think jazz and blues have had a lot of influence on my music. There's some kind of rhythm-and-bluesy stuff in the piece, but that's there to evoke the city. I imagine the beginning of this piece as if, on a normal day going to work, you're driving down the road and you're hearing stuff, and some of that stuff is chaotic, and maybe you go past a radio that's playing something. 
That was my image for the early part of the piece, and then there's a catastrophe.</p><p>I mean, part of the idea--a big part of the piece--is that mourning is both public and private, and there are a number of feelings involved in mourning, some of which is fear, and some of which is rage, and some of which is sorrow. All those are part of mourning. Also, the public part has to do with maybe religion and maybe hymns. The piece ends very much with-- A Jewish friend of mine was listening to it, and he said, "My god, that's a cantor," and I said, "Yes, that's what it is," because that's what seemed to me right for the end of the piece, that kind of emotionalism, but again, in a formal sense.</p><p>I have to say, I heard a cantor on the radio once in my life, maybe 30 years ago. I've never heard a cantor since, but it made a big impression on me, obviously. The influences are things like that rather than classical music as such.</p><p><strong>[01:07:34] Dan: </strong>What about this new piece? You're writing it at 77, which is impressive in itself, but what makes you so confident that it's one of your best? Tell us about the influences.</p><p><strong>[01:07:42] Jan: </strong>It goes back to ideas I was working with already in my 20s. What I've been saying for years about my music is that I wanted to reclaim the wildness of the music of my early 20s. I wrote a 100-page ensemble piece that is just absolutely waves of notes all the time. [laughs] I guess technically you'd call it a '60s-'70s tone-color piece. At a certain point after that, I said, "I've got only a certain number of notes in my life, and I'm using them up too fast." I cut back, and I said, "Oh, I just want to be expressive. Whatever I do, I'll be expressive." I really began to write much more contained music, and in some ways music more overtly related to tradition. 
These are the pieces that some have called neo-romantic, though they're also very much involved with Beethoven, and Brahms, and Balinese music, and on and on, and jazz and blues.</p><p>I've said that I want to get back to that wildness, and yet know what I'm doing, and that's exactly what I did in this piece. I just told myself that that's what I wanted to do, and then I waited to see what happened, and what happened was this piece, in which I used these ideas that go back to my early 20s, but they're much, much more controlled and much, much more aware of what they're actually going to sound like when they're played. I'm writing big masses of sound sometimes, but I'm aware of what's going to happen when this happens. I learned a very great deal from Beethoven's sketches.</p><p>One of the things you learn from Beethoven's sketches is to let it be bad, because what Beethoven did, very often in the sketches, you would see a patch that is just dismal, and he knew it, and he'd go back and change it. In other words, a lot of very great music starts off as bad music. I had to fully absorb that. I think when I was younger, I thought music was either all good or all bad. No, that's not how it works. I have more control as a composer, and that's one of the reasons it's my best piece. Even though I'm using ideas that I worked with before, it's in a whole different context, and this piece just doesn't resemble anything else.</p><p>Yet, even though, to me, it is a piece that sounds unique, it is very much its own voice and its own world, and that's hard to do. I was finding harmonies that I'd never heard before, even though technically there's no harmony that hasn't been there before. In a way, the harmonies are new. The way the piece is put together is new. I'm not somebody who puts newness and innovation on a pedestal. I think that's a virtue, but not the only virtue. That's one of the things about a lot of modernism: innovation was the only virtue, and I find that foolish too. 
It's a piece that doesn't sound like any other but still sounds like me and still sounds like a real personality.</p><p>I think it was headed toward the setting of Frost's <em>Stopping by Woods on a Snowy Evening</em> that I did early on. That, and the image of late autumn in New England, this beautiful gloominess after the leaves are gone, really influenced the feel of the piece. I find it nicely dark and eerie.</p><p><strong>[01:10:50] Dan: </strong>Well, I very, very much look forward to it. Please do send it to me.</p><p><strong>[01:10:54] Jan: </strong>Are you musically trained at all, or just an enthusiast?</p><p><strong>[01:10:57] Dan: </strong>No. Just an enthusiast.</p><p><strong>[01:10:59] Jan: </strong>Well, people like you were what made 19th-century music, these enthusiastic amateurs, whether they played or not. One of Brahms's great friends was a surgeon. Billroth is still very famous for his innovations in the 19th century with anesthesia, and he was an amateur viola player. He knew a lot about music. These were typical of-- One of the early great critical articles about Brahms was written by a doctor--or actually a lawyer, I think--but these were amateurs who knew a lot about music, who were terrifically enthusiastic about it, and they were Brahms's audience. They were mostly politically liberal too. The political conservatives liked Wagner and Bruckner. It's interesting, because the political conservatives liked the more radical composers, and the politically advanced liked the more conservative composers.</p><p><strong>[01:11:47] Dan: </strong>Yes. That's backwards. What I find crazy interesting is, I don't know how easy it would've been to get into this without the internet and technology--solo, without some people in my life who were there to teach me--but it is so easy on the internet. 
I mean, the amount of great recordings on YouTube, the amount of lectures that are available for free; you have Spotify, with endless music that you can always dive into; there are forums where people are just chatting all day about this stuff. It's really crazy and it's really great, but--</p><p><strong>[01:12:18] Jan: </strong>I think the availability of music is absolutely fabulous, but it also has a problem. I talked about this at the end of the Brahms book. In Brahms's time, you didn't get to hear Beethoven's symphonies live very often. How you got to know them was somebody playing them on piano in the parlor, or playing them four hands. To actually get to go and hear Beethoven's Ninth was so special in the 19th century. To hear a really good performance, that's something that only happened to you a few times in your life. That was special, and this was part of the meaning and value of the music. It's not special anymore. Whatever you want to listen to, from hip-hop to Schoenberg to Beethoven, it's all just a click away online. I think, to a degree, it tends to make it less special. In fact, I know it does.</p><p>It's a double-edged sword is what I'm saying. The availability you were talking about is certainly tremendous. At the same time, it tends to all glob together because it's so easy. It's like the difference between climbing a mountain and driving up a mountain. I'm an old mountain climber, so I know the difference. It's the same mountain top, but it's not the same.</p><p><strong>[01:13:29] Dan: </strong>That's a very, very good analogy, I feel. Actually, I do have a couple of questions just on your writing. I found this really, really interesting, which is, in your Brahms book, there's a dispute, right? There's this topic where Brahms claimed that he was abused in brothels when he was a boy playing piano in them. 
I heard from your blog, anyway, that you had a friend who was on the Pulitzer Prize committee, and they told you that you didn't have a chance at it because you took the viewpoint that Brahms was being truthful and that he was abused in these brothels.</p><p>My understanding, anyway, was that there are people within academia, or from the committee, who truly believed that this was false and have a whole reputation and career based on the fact that this is false. What I was curious about is, it seems to me that that fact is sort of a he-said-she-said, and you can look at the evidence, and you made your conclusion, yet you have people with vested career interests in having one thing be considered the truth. I'm just wondering if that influenced, actually, how you wrote your later books, like your <em>Mozart</em> or <em>Beethoven</em>.</p><p><strong>[01:14:35] Jan: </strong>No.</p><p><strong>[01:14:36] Dan: </strong>Not at all?</p><p><strong>[01:14:37] Jan: </strong>Because I don't have any vested interest, I think. I follow the facts as best I can find them. The issue of Brahms and the brothels doesn't matter to me one way or the other. These were waterfront establishments--a handy combination of restaurant, dance hall, and brothel--in the St. Pauli district. The St. Pauli Girl beer: the girl on the label is one of those St. Pauli girls, who were waitresses, and they were multifaceted. They were also prostitutes. Brahms said all his life that he was abused by these women when he was 12, 13. You don't make up things like that. There are multiple testimonies to that.</p><p>A famous Brahms scholar came along and said, "No, it never happened." Everybody figured the German had to be right, because I came along writing a Brahms biography when nobody had ever heard of me. The Brahmsians, I think to this day, don't take me seriously because I came out of nowhere. Every book I've written is like that. 
When I was writing about Ives, I called the leading American music specialist, and he said, "Who the hell are you?" When I finished the book, he vetted it for me. He gave me a lot of very good suggestions. Then he wrote to the publisher and said, "Don't publish this." He later relented. I'm not going to say who it was, but I don't worry about that.</p><p>By the way, the friend on the Pulitzer committee who told me that wasn't my friend after that. I think a certain number of people agree with me now. If you're a historian or biographer, you should not found your career on anything but what you believe to be true. My main opponent in this-- My Brahms got slammed for the issue of the brothels more than any other single issue. In fact, that is the main issue it got slammed for--not by all the reviews, but some of them. The main person who went after me about that is somebody who made a career of representing the ideas of Kurt Hofmann, the German scholar who said it never happened. She founded her career as a Brahmsian on saying this never happened. What I'm saying is you shouldn't found a career on anything like that, but on what you believe to be true.</p><p>If I had been convinced by better facts, I would've changed the book. In fact, I did. I went back to Hofmann and looked at it again, and he convinced me that the Brahmses were not as dirt poor as Brahms painted them. Brahms did not lie, but he did exaggerate. He exaggerated the struggle, the financial struggles, of his family. They were up and down, but basically they were more or less bourgeois, nor did they live in a slum when he was born, which was the old tradition. Hofmann convinced me he was right about that, that they weren't living in a slum, and they were up and down but did okay. I changed that. I went back and changed all that. 
If he'd convinced me about the brothels, I would've changed that too with no problem at all, because I don't care one way or the other as long as it's what I think is true.</p><p>The main reason I think it's true is because Brahms said it, and you don't make up things like that about yourself. You don't make up a shameful and humiliating lie about yourself and repeat it your whole life, least of all Brahms, who hated lies and lying.</p><p><strong>[01:18:02] Dan: </strong>Yes. A question on music versus literature. Bach, Mozart, Beethoven: you can disagree at the margin about whether they're the best three composers ever, and people have, but I don't know of any other art form where it's so widely agreed upon that you'll get those three names.</p><p><strong>[01:18:18] Jan: </strong>Well, I don't know. You get Rembrandt and Vermeer, and there are people who dominate in every period. Though it's going to be different in the visual arts. It is different. Every art has its own thing. For example, romanticism happened in the visual arts long before it happened in music.</p><p><strong>[01:18:33] Dan: </strong>Yes. I guess in literature, do you think of it as the same thing? That's where I was going to cut to, because I think it's just different. You could have Shakespeare, Homer, Dante, Tolstoy, just depending on what you're interested in, but it almost seems like in music, with the God-likeness of Beethoven and Bach and Mozart, people will stick their noses up if you think someone is better than Bach. I don't see that quite as much in literature as you do in music.</p><p><strong>[01:19:01] Jan: </strong>Or in the visual arts. No. I think there was a sense in the 19th century that Beethoven just sucked all the air out of the room, and what the hell do you do now? I mean, that certainly was a common thought. I wonder if there have been any artists like that. Was Michelangelo like that? Do people tear their hair after Michelangelo and say, "Who can do anything after Michelangelo?"
I've never heard that that happened. It is an interesting thing. The arts are different.</p><p>I think Schoenberg as a composer said, "I'm just doing the kind of thing Picasso would do." What Picasso and the other contemporary artists were doing, it's the same kind of thing, but somehow music is different. People are more accepting of abstraction, say, in the visual arts than they are of serialism in music. That's probably because of the difference between hearing things and seeing things somehow. I don't have any opinion about that, but they are different. It affects how fast things get accepted and find an audience. Things do change. I mean, Schoenberg is-- I think audiences are much more willing to listen to Schoenberg.</p><p>I heard Mitsuko Uchida got a standing ovation for Schoenberg's piano concerto at the Boston Symphony, I don't know, 20, 25 years ago. When James Levine came in with the Boston Symphony, he did a whole Schoenberg and Beethoven year and filled the house a certain amount of the time. I think film has helped a lot. There's a lot of wonderfully strange music in films, not always in horror movies, but sometimes, and I think it's changed people's hearing in a good way, maybe. They're able to take it in.</p><p><strong>[01:20:35] Dan: </strong>That's a good point. This kind of comes back to the internet thing too. I think there's more accessibility. It's easier to go on the internet and watch someone raving about Schoenberg on a YouTube video and explaining why it's good than it is to pick up a record, if you don't have any background, put it on, and say, "What the heck is this?" Maybe there's something there.</p><p><strong>[01:20:55] Jan: </strong>These forums that you probably know and I don't, where people really talk about this stuff all the time-- I don't tend to keep up with those forums. What I do is go on to Amazon, and I periodically Google myself just to see at random what people out there are saying.
Other than that, I don't-- you've been influenced by people talking about these things, and I probably should go into that more. It's very interesting. Are you talking about things like Reddit?</p><p><strong>[01:21:25] Dan: </strong>Yes. There's a Reddit classical music page, and then there's a site called Talk Classical. Both of them are very similar, but it's just people talking about classical music pretty much every day.</p><p><strong>[01:21:35] Jan: </strong>I think it's a great thing.</p><p><strong>[01:21:38] Dan: </strong>It makes it super accessible if you don't have people directly in your life who are really into it, right? It's great. You mentioned that when you wrote your first two books, people said, "Who are you?" I think with your <em>Beethoven</em> and your <em>Mozart</em>, just as a layman going on the internet and asking, "What biographies of these composers should I read?" you're pretty much the first one that comes up. That has changed over time. Those two are both top recommendations if you search around.</p><p><strong>[01:22:05] Jan: </strong>I'm really a very lazy person, but I'm dogged as hell. I just kept doing it and waited for the dollars to flow in, and they never did. The only book I've ever been paid reasonably well for was the <em>Mozart</em>. The <em>Ives</em> I was paid $15,000 for, and the <em>Brahms</em> I was paid a lot more than that. After all was said and done, I had about $60,000 for three years of work.</p><p><strong>[01:22:30] Dan: </strong>I can only imagine how much the <em>Beethoven</em> and <em>Mozart</em> took to go through all that source material.</p><p><strong>[01:22:35] Jan: </strong>The <em>Beethoven</em> was incredible; the <em>Mozart</em>-- Well, I told people my Charles Ives library is about two bookcases, my Brahms library is about three bookshelves, my Beethoven library is about two bookcases. The amount of writing on Beethoven was overwhelming.
Mozart was one skinny bookcase, so it seemed relatively easy by comparison.</p><p><strong>[01:23:07] Dan: </strong>I heard you mention in an interview that you believe there are three great spiritual forces: science, religion, and art. It seems that public perception has turned a bit negative on all three over the last few decades. Are you optimistic or pessimistic on science, religion, and art in the 21st century?</p><p><strong>[01:23:24] Jan: </strong>There's a certain anti-science movement, but it's not going to change science. It's not going to destroy science, and nothing's going to destroy religion either. I don't think anything is going to destroy art, though I'm more concerned about art in some ways than the others. The reason I say those three things is that, to me, science is about what's possible to know within the limited means at our disposal, which is a lot, but it's still very limited. In other words, in good science, you limit yourself to the scientific method, to what our instruments can measure, and to what can be seen and measured, but that leaves a great deal of things out.</p><p>Religion, to me, is about what can't be known. It's about things that will always be beyond us, and that's why faith is involved. To mix religion and science is a disaster for both. To say that the Bible is scientific is just total nonsense, because that's not what it's for. Art is somewhere in between those. Art is about emotion, and it sometimes makes use of science. It sometimes makes use of religion. It makes use of spirituality. It's the emotional inwardness of the human spirit that animates art, and it draws on those other two. In a way, I see it in between those two in a very important way. The trouble is a lot of art is not trying to do that anymore. It's not trying to exalt the human spirit, which, as Wagner put it, is the main goal of art.</p><p>It's trying to make a buck and to be famous.
That's why I dislike Andy Warhol intensely. For Andy Warhol, art was something that made you rich and famous. That's what it was about. It was a con game in that direction, and I don't think that's what art is about or should be. A great deal of art these days is like that because, especially in the visual arts, you can make so much money. All you have to do is convince people that you're important, and the millions can start rolling in. It doesn't have anything to do with whether you make people's lives any better, or whether you amuse them or exalt them, or-- It doesn't matter, because you get the bucks anyway. In the case of academia, if you can get a job at a school, you can be completely indifferent to the effect of your music on people and still get tenure and have a pretty nice life.</p><p><strong>[01:25:33] Dan: </strong>Well, Jan, you've been extremely, extremely generous with your time. I just had a blast with this one, honestly.</p><p><strong>[01:25:39] Jan: </strong>I did too. I did too.</p><p><strong>[01:25:41] Dan: </strong>Thank you so much for your time.
I really appreciate it.</p><p><strong>[01:25:43] Jan: </strong>I always appreciate running on about these issues because they're terrifically important to me.</p>]]></content:encoded></item><item><title><![CDATA[Eli Dourado]]></title><description><![CDATA[Hard tech, growth, societal collapse]]></description><link>https://www.danschulz.co/p/eli-dourado</link><guid isPermaLink="false">https://www.danschulz.co/p/eli-dourado</guid><dc:creator><![CDATA[Dan]]></dc:creator><pubDate>Thu, 06 Jun 2024 10:40:42 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/145369587/042f0f8cd10664c51254e2944d91b9cb.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p></p><div id="youtube2-lN1edtVOxaw" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;lN1edtVOxaw&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/lN1edtVOxaw?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><iframe class="spotify-wrap podcast" data-attrs="{&quot;image&quot;:&quot;https://i.scdn.co/image/ab6765630000ba8a9d7d879ccef17eb7a8226151&quot;,&quot;title&quot;:&quot;Eli Dourado&quot;,&quot;subtitle&quot;:&quot;Dan Schulz&quot;,&quot;description&quot;:&quot;Episode&quot;,&quot;url&quot;:&quot;https://open.spotify.com/episode/5xA4HbCP0zbWkw9F3hK9KT&quot;,&quot;belowTheFold&quot;:false,&quot;noScroll&quot;:false}" src="https://open.spotify.com/embed/episode/5xA4HbCP0zbWkw9F3hK9KT" frameborder="0" gesture="media" allowfullscreen="true" allow="encrypted-media" data-component-name="Spotify2ToDOM"></iframe><div class="apple-podcast-container" data-component-name="ApplePodcastToDom"><iframe class="apple-podcast " 
data-attrs="{&quot;url&quot;:&quot;https://embed.podcasts.apple.com/us/podcast/eli-dourado/id1693303954?i=1000658030399&quot;,&quot;isEpisode&quot;:true,&quot;imageUrl&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/podcast-episode_1000658030399.jpg&quot;,&quot;title&quot;:&quot;Eli Dourado&quot;,&quot;podcastTitle&quot;:&quot;Undertone&quot;,&quot;podcastByline&quot;:&quot;&quot;,&quot;duration&quot;:4327000,&quot;numEpisodes&quot;:&quot;&quot;,&quot;targetUrl&quot;:&quot;https://podcasts.apple.com/us/podcast/eli-dourado/id1693303954?i=1000658030399&amp;uo=4&quot;,&quot;releaseDate&quot;:&quot;2024-06-06T08:30:00Z&quot;}" src="https://embed.podcasts.apple.com/us/podcast/eli-dourado/id1693303954?i=1000658030399" frameborder="0" allow="autoplay *; encrypted-media *;" allowfullscreen="true"></iframe></div><h3>Timestamps</h3><p>(0:00:00) Intro</p><p>(0:00:43) Non-TFP econ metrics</p><p>(0:04:59) Are ideas getting harder to find?</p><p>(0:06:52) Economics as a field</p><p>(0:10:43) Tyler Cowen&#8217;s influence</p><p>(0:13:00) Culture and economic growth</p><p>(0:16:13) Uniqueness of the United States</p><p>(0:19:43) Education</p><p>(0:23:54) Revisiting &#8220;notes on technology&#8221;</p><p>(0:37:28) Investing in hard-tech</p><p>(0:44:30) Government&#8217;s role in tech</p><p>(0:49:22) Talent</p><p>(0:54:34) NEPA</p><p>(0:56:56) AI</p><p>(1:01:20) Societal collapse</p><p>(1:09:54) Advice to grow the economy</p><h3>Links</h3><ul><li><p><a href="https://x.com/elidourado">Follow Eli on X</a></p></li><li><p><a href="https://www.elidourado.com/">Eli's writing</a></p></li><li><p><a href="https://twitter.com/dnschlz">&#8288;&#8288;&#8288;&#8288;&#8288;Follow Dan on X&#8288;&#8288;&#8288;&#8288;&#8288;</a></p></li><li><p>Subscribe to Undertone on&nbsp;<a href="https://www.youtube.com/channel/UCtRjQ8EA3Via7KXzv9F173Q">&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;YouTube&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;</a>,&nbsp;<a 
href="https://podcasts.apple.com/us/podcast/the-world-of-yesterday/id1693303954">&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;Apple&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;</a>,&nbsp;<a href="https://open.spotify.com/show/59YkrYwjAgiKAVMNGWPaLE">&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;Spotify&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;</a>, or&nbsp;<a href="https://www.danschulz.co/">&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;Substack&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;</a></p></li><li><p>Watch or listen to previous episodes of Undertone with <a href="https://www.danschulz.co/p/tyler-cowen">&#8288;&#8288;&#8288;&#8288;Tyler Cowen&#8288;&#8288;&#8288;&#8288;</a>, <a href="https://www.danschulz.co/p/vitalik-buterin">&#8288;&#8288;&#8288;&#8288;Vitalik Buterin&#8288;&#8288;&#8288;&#8288;</a>, <a href="https://www.danschulz.co/p/scott-sumner">&#8288;&#8288;&#8288;&#8288;Scott Sumner&#8288;&#8288;&#8288;&#8288;</a>, <a href="https://www.danschulz.co/p/samo-burja">&#8288;&#8288;&#8288;&#8288;Samo Burja&#8288;&#8288;&#8288;&#8288;</a>, <a href="https://www.danschulz.co/p/3-steve-hsu">&#8288;&#8288;&#8288;&#8288;Steve Hsu&#8288;&#8288;&#8288;&#8288;</a>, and <a href="https://www.danschulz.co/">&#8288;&#8288;&#8288;&#8288;more&#8288;&#8288;&#8288;&#8288;</a></p></li><li><p>I love hearing from listeners. Email me any time at dan@danschulz.co</p></li><li><p>Share anonymous feedback on the podcast: <a href="https://forms.gle/w7LXMCXhdJ5Q2eB9A">&#8288;&#8288;&#8288;&#8288;&#8288;https://forms.gle/w7LXMCXhdJ5Q2eB9A</a></p></li></ul><h3>Transcript</h3><p><strong>[00:00:09] Dan: </strong>Welcome. I assume that many listeners on the show are familiar with the fact that American economic growth started to stagnate starting in 1973. This is probably one of the most discussed topics in tech and econ circles, and my guest today is an expert on the issue. Eli Dourado is the chief economist at the Abundance Institute and one of the most interesting thinkers and investors in hard tech. 
We talk about whether ideas are getting harder to find, culture's influence on economic growth, and Joseph Tainter's theory of societal collapse. I hope you enjoy it. Let's jump right in. </p><p>All right. I'm here today with Eli Dourado. Eli, welcome.</p><p><strong>[00:00:41] Eli Dourado: </strong>Dan, it's great to be here.</p><p><strong>[00:00:43] Dan: </strong>Great. First question: what is the most important metric you think about outside of total factor productivity? Maybe another framing of this would be: is there anything that you would not sacrifice for us to achieve 2% total factor productivity growth?</p><p><strong>[00:00:58] Eli: </strong>Yes. Much more important than economic growth, actually, is human rights and other kinds of decency and well-being type things. I would place that higher than economic growth on my hierarchy of needs. That does matter more to me. I think we've attained a reasonable level of that, and I think they're not really in conflict with each other. At the margin, the more important thing for most US-based people is economic growth, but I definitely wouldn't want to sacrifice the more civic qualities that we already have. They could improve, but they seem good enough. At the margin, I think it's more important to have economic growth, but I wouldn't sacrifice those basic freedoms, freedom of speech, et cetera.</p><p><strong>[00:01:57] Dan: </strong>Yes. Yes. That makes total sense. What about something quantitative, specifically from economics? Are there any economic indicators where you think, "Hey, if we're going to shoot for 2% TFP growth, we should be watching this one and make sure it doesn't get impacted"?</p><p><strong>[00:02:10] Eli: </strong>Yes. Sometimes people talk about inequality, and they think of growth and inequality as being at odds. I happen to think that's not true at all. If you look at the data, inequality has been increasing in the low TFP growth era.
You see massive inequality in TFP growth by industry, which implies also by geography. We have a huge tech boom in San Francisco. People in Arkansas are not necessarily doing so well from that, and the remedy is to increase TFP growth across all industries. That actually improves inequality, but I would worry if we were getting some sort of boom that was leaving people behind. That would concern me significantly.</p><p>Yes, you don't want to have growth without it being broad-based, or with it being less broad-based than it was before. If it was the case that we were getting huge TFP growth, but say 20% of the population was just worse off than before, that would trouble me quite a lot.</p><p><strong>[00:03:29] Dan: </strong>What's the right way to think about inequality? Let's say that the wealth of everybody goes up, including at the bottom quartile, but inequality increases. Is that a good thing, is that a bad thing, how do we think about that?</p><p><strong>[00:03:40] Eli: </strong>It's fine on its own if everyone's going up. The thing that worries me is political stability. This is something I even wrote about in my dissertation. People have to feel like they're getting part of the surplus of being members of society, being part of society. If they don't, that's when you start raising, for me, concerns that people might defect from the system. That can take a number of forms: apathy or just outright hostility to the existing system. That's actually something I worry we're going through today: we haven't delivered growth. We definitely haven't delivered growth in all parts of the country, all parts of the income spectrum, all industries, and so on.</p><p>There's a fraction of the population, I don't know how big it is, that doesn't care about our society, about the common good, and that worries me. I think of that as a reason to have more growth.
I don't think of it as at odds with broad-based TFP growth, but I can imagine a scenario where they are at odds, and that would concern me.</p><p><strong>[00:04:59] Dan: </strong>Got it. Got it. Got it. Are ideas actually getting harder to find?</p><p><strong>[00:05:03] Eli: </strong>Okay. I think you probably know my general take is that, whether or not they're getting harder to find, that's not the reason why we're stagnating. I'll just preface it with that. I think that they are getting harder to find in some fields and easier to find in others. Fundamental physics, it's probably getting harder to find. There still is new physics to discover, but it's big teams and billion-dollar pieces of equipment and so on that we need to make progress on that. Other areas, say biology-- that's the one that I feel most strongly about. If you think about what a graduate student in biology, by themselves, in a normally equipped lab is able to do today versus what the best teams of well-funded senior biologists could do 50 years ago, the grad student today can do more: can sequence DNA, can edit DNA, can create new synthetic organisms, et cetera.</p><p>In biology, ideas are getting easier to find. In physics, they're getting harder to find. It just depends on the tools that are being developed, the new levels of abstraction that they're generating. Where new tools serve as platforms, as a new abstraction, you don't actually have to learn how to do a million things to unlock the new capability. Those are areas where it's getting easier to find. Like software, too: computer programmers do not need to know how a computer works today, for the most part. At least most computer programmers don't need to know that. You abstract away all that burden of knowledge, and it's easier than ever to make progress.</p><p><strong>[00:06:53] Dan: </strong>Economics as a field: I think many people have a feeling that it's gotten a lot weirder over the past few years.
COVID broke a lot of people's general models of how the world should work. I'm just curious what you think of the field of economics. Do you view it in higher or lower regard than you did pre-COVID?</p><p><strong>[00:07:09] Eli: </strong>I've been on this interesting path of being trained as an economist and coming to identify less and less with the field over time. When you're in grad school, getting a PhD or whatever, you really want to be accepted as part of the field by your peers and by the senior members of the field. More and more, I just say, "Don't put labels on it," and, "I'm going to do what I find most interesting. Whether or not it's economics or whatever is irrelevant to me." I'm not very self-conscious about the field anymore. I'm curious, what do you think has gotten weirder about it? Give me your case.</p><p><strong>[00:07:48] Dan: </strong>Probably the most salient one is just inflation and government spending during COVID. For basic models of how the world should work, it just seemed like nobody was very good at all at predicting how the economy would behave. Housing has been very weird. There are several of these things where I think the markets are just behaving differently than people would have expected.</p><p><strong>[00:08:08] Eli: </strong>Okay. I was thinking you meant more academic economics, the profession with a capital P.</p><p><strong>[00:08:16] Dan: </strong>That works too. Yes, that's interesting as well.</p><p><strong>[00:08:19] Eli: </strong>I think that the thing that has been underrated by economists doing forecasting is demographics. The problem, say 20 years ago and for a few years before, was the global savings glut. You had very low interest rates and so on. That's an artifact of the baby boom and the fact that a high percentage of the population was in peak earning years and also peak saving years.
If you're earning a lot and saving a lot, that means you're producing a lot and not consuming as many goods, so there's this glut of goods on the market and everyone wants to save. That's going to be a period with low interest rates, because it's penalizing saving.</p><p>Now, we're through that. If you think about these multi-decade-long trends, when the baby boomers retire, that shifts the trend pretty significantly, because all of a sudden, the big demographic hump is no longer producing a lot and saving a lot. They're producing zero and just consuming. They're spending down their savings. You'd predict that that would have a big impact on interest rates and inflation, and we'd no longer be at this point where you could just keep interest rates low forever and not get inflation out of it. I think that that may be what's going on, but, A, I'm not 100% confident that that's the main explanation. B, very few people were building that into models.</p><p>C, you're right, it's crazy. If this keeps going, with the size of the federal budget deficit and the amount of increased expense that we're going to have supporting old people over the next 10 years or longer, people are losing their minds, and I don't see how we can support it. Then again, if you had told me the path that we would be on in terms of deficits 20 years ago, I would have said, "That's not sustainable. That can't happen." We've been continually surprised by the size of the deficits.</p><p><strong>[00:10:44] Dan: </strong>We were talking a bit about your background in academia, and I understand, anyways, Tyler Cowen was your PhD advisor.</p><p><strong>[00:10:50] Eli: </strong>Yes.</p><p><strong>[00:10:51] Dan: </strong>Where did he most influence you?</p><p><strong>[00:10:53] Eli: </strong>Oh, gosh. So many ways.
I think the number one thing is having multiple models of the world in your head at the same time and having a portfolio approach to thinking about the world: not holding any one of them too strongly, being able to see multiple sides of a question. It sounds basic, but a lot of people don't do that, and Tyler's very good at asking the right question to make you do that, to be able to explain all the sides and actually see them as having some value. Even if it's your minority position in the portfolio in your head, it has some value and it has some explanatory power, and you don't want to dismiss it. That open-mindedness is really important.</p><p>The value of studying widely, of bringing in models from different parts of economics and from outside of economics, and seeing the applicability of those models to the question that you're interested in. Tyler, a lot of his early work was influenced by finance models. He was thinking about Fischer Black and stuff like that. Of course, I've read all the Fischer Black stuff, and it became influential for me as well, but thinking about, "Okay, how does this influence questions of macroeconomics if you have a finance underpinning behind your thinking?" Then just completely other fields that you get to go think about in terms of those different inputs. I would say those are some ways. Plus, he's just a very kind and very-- he's a student of human nature. If you're working with him, he will study you. He will develop a model of who you are. He's, like I said, very kind. He always had something constructive and helpful to say.</p><p><strong>[00:13:01] Dan: </strong>Yes. That last piece, the older I get, the more important I realize that is in whoever you work with.</p><p><strong>[00:13:06] Eli: </strong>Yes, exactly.</p><p><strong>[00:13:07] Dan: </strong>That's great. We're going to talk a bit about technology and regulations as drivers of economic growth. The one driver I wanted to ask you about is culture.
How much weight do you place on cultural factors in driving economic growth?</p><p><strong>[00:13:23] Eli: </strong>Certainly, it has an impact, and I don't know how to operationalize it. For one, culture seems to be influenced a lot by material factors, so economic conditions. Even the style of music: you have economic growth in the '80s, you have poppy, happy music. Then there's a depression or recession in the early '90s and you get grunge. I think there are some inputs there. The other piece of this that you could look at is that we were growing pretty fast from, let's say, 1920 through the early 1970s. I do think part of the thing that led to stagnation in '73 or so was that by the early to mid-1960s, we had been seeing some of that economic growth for a while, and culturally, we started to, A, take it for granted, and B, view it as somewhat fake.</p><p>Even Ralph Nader's like, "Oh, it's unsafe." It's like, "Yes, we're growing, but it was unsafe." He didn't probably say it this way, but I would say, "Once you account for the unsafety, if you factor that into economic value, we're not growing as fast as we think we are." Then you have Rachel Carson, <em>Silent Spring</em>. This is another way in which the growth is fake, because we're not accounting for the decrease in biodiversity or in environmental amenities that we have. You see, if the culture starts to have maybe even some valid critiques of the growth that you're having, there's just going to be a backlash and a push for increased attention on those parts of the notional growth that they say you're not getting. I think that's maybe what happened.</p><p>I don't know how to measure culture or-- it's so multidimensional. It's not just more or less culture. It's culture in all these different dimensions.
Certainly, if you look internationally, cross-sectionally, across countries, the US has a certain culture that does seem to be well-suited to entrepreneurship and other kinds of activity that are conducive to us being at the frontier. Whereas other cultures are either more inherently pessimistic or more deeply cynical or something, and that makes them not well-suited to that.</p><p><strong>[00:16:14] Dan: </strong>Yes. I actually had this question ahead of time. It's good you led to this, which is: why has there only been one United States? It's not like the US is perfect. If you look at human history, our story of growth would probably look very different without the US, and it seems to be an outlier. It's not totally clear that someone else would have stepped in to take its place in the same way that we did. That almost leads you to believe that either a positive growth sentiment is not part of human nature or corruption is a part of human nature, all of these things that the majority of countries deal heavily with. Why do you think there was only one?</p><p><strong>[00:16:46] Eli: </strong>I think the US benefited a lot from geographic isolation, being so far away from any threats from peer countries. The history of Europe is that they're all invading each other all the time. They develop in that way. That means they do need more centralized government. When you have a wartime footing, you need centralized command and control. Maybe you don't, but let's at least go with that. You probably need more centralized command and control than you would otherwise, at least. The US hasn't had that need, because there hasn't been a real threat of invasion. It's been less centralized command and control. Parts of the American West, for most of American history, have been ungovernable basically from Washington.</p><p>Tyler has this paper about whether technology drives the growth of government.
Without computers or whatever, you wouldn't be able to keep all the records that you would need to have a large administrative state. The US being, for most of its history, basically ungovernable created a unique culture that valued the individual and the contributions of the individual. Whereas in Germany, okay, they have these train systems because they have to be able to move war materiel to the front lines or whatever, the US is just, "We can't control what happens in most of our country." I think a big part of it is that when you have centralized control, it's much higher variance.</p><p>You can have really good outcomes that come from central direction, but you can also have these disastrous centralized policies. The US, just taking a more decentralized governance path, has just chugged along at a more or less constant rate without any major setbacks. That, I think, maybe has contributed to it. The other thing is that there is a path dependence here. There are some recent papers that came out, I guess last year, NBER working papers on zero-sum thinking. The conclusion is, "Okay, positive-sum attitudes and thinking are a causal factor in economic growth, but the causation also runs the other way." If you grow up in a stagnant or declining economic environment, you're more likely to be a zero-sum thinker, because if in fact the economy is zero-sum and not growing, then you're going to be a zero-sum thinker. Once you're locked into those thinking patterns, it's hard socially to get out of them.</p><p>Anyway, I don't know if that's a complete answer to your question, but those are where my mind goes when you ask it.</p><p><strong>[00:19:43] Dan: </strong>Yes. I'm curious about education. How important do you think education policy is for the eventual goal of increasing total factor productivity?
It strikes me that it's now becoming pretty easy to teach yourself very high levels of any technical thing you want to learn online, and you can just meet people online and get engaged in communities. It's a lot easier. Is education policy going to be more or less relevant over the next few years?</p><p><strong>[00:20:08] Eli: </strong>The funny thing about education is it probably hasn't been about conveying information for a long time. Even say 50 years ago, you could get a library card and go down to the library and get any book you wanted. Once you learned to read, at least, you could, in principle, go down and read the books, read the journal articles even, and teach yourself anything you wanted to know. I think a lot of education is really about motivating students: coaching them, keeping their interest levels up. Another important component of education, of course, is babysitting. The reason we have schools, in large part, is because parents have to work.</p><p>The other finding that I get, and I think it's pretty robust, is that we're not necessarily teaching in the most effective manner. The most effective manner is tutorial style with mastery-based learning. If you had something like that, you could maybe get much better outcomes. We're not, in general, doing a good job, even if students are, for the most part, meeting grade-level expectations. If you then go back and survey American adults, something like over half of US adults, I think 53% was the latest data, don't read at a sixth-grade level, which means being able to read multiple texts, compare and contrast the sources, and draw conclusions from the different things that they read in the multiple sources.</p><p>If you're thinking that's the standard, that everyone should read at the sixth-grade level, we're not meeting it. We're not illiterate, we can read, but we can't-- Obviously, students are attaining those standards as they're going through school, but then once they're out of school, it goes away.
One thing I've been thinking about lately is to what extent AI tools, LLMs, et cetera, can make the mastery-based system more accessible for everybody. If you have an LLM that is coaching you, that is trying to keep you engaged, and it's always at your level, giving you material that is appropriate for what you already know, reviewing what you already know at spaced intervals, making sure that you retain it, and is fun and engaging in that way--</p><p>You could imagine an inversion of the school where you have some tablet or whatever that does the teaching. Then schools are still going to have a babysitting function, so you still need human teachers there most of the time, but they don't have to be specialists in how to convey certain information. They can be there to meet the child's emotional needs, to make sure fights don't break out, to care for the child. It's a different person that you would hire to be a teacher in that world, but you still need teachers. My question for the next 10 years of education policy is to what extent we can realize the dream of-- Neal Stephenson wrote about the <em>Young Lady's Illustrated Primer</em> in <em>The Diamond Age</em>, and if you could have something like that available to all students and then decouple the teacher figure from actually doing the education, I think that would maybe be the best way forward.</p><p>Education matters, but it's not so much how effective you are at communicating the information to children as how good you are at fostering a love of learning and creating an association in the children that doesn't make them want to graduate and then never read again.</p><p><strong>[00:23:54] Dan: </strong>I want to revisit your 2020 post of notes on technology in the 2020s. You go down a list of different areas and talk about potential technologies that might push total factor productivity up to that 2% rate.
Just broadly speaking, before we dive into any specific area, do you feel more or less confident, sitting here in 2024, that this is an achievable goal than you did when you wrote it in 2020?</p><p><strong>[00:24:23] Eli: </strong>In terms of the goal of increasing TFP, I think it's very achievable. I don't think all the detail I had in that post was right by any means, but I feel it's still very much achievable.</p><p><strong>[00:24:35] Dan: </strong>What I'd like to do is go through each one and just see if you have-- Do you have any interesting updates off the top of your head from the last, what is it, four years? Have there been any interesting breakthroughs that are worth noting?</p><p><strong>[00:24:47] Eli: </strong>Yes. One of them, off the top of my head: I think I was pretty bearish on battery improvements in the post. Today, I'm way more bullish on it. I ended up making a battery investment. That's why.</p><p><strong>[00:25:03] Dan: </strong>What changed there? Why are you more bullish now?</p><p><strong>[00:25:06] Eli: </strong>A better understanding of the bottlenecks in batteries, and actually seeing the-- There's a company that I invested in that is going to smash that bottleneck. What I realized is the bottleneck is cathodes. Most battery startups are working on better anodes, which is not the bottleneck. For cathodes, we only have two in lithium-ion. We have the nickel manganese cobalt cathodes, which are higher density than the other one and a bit more expensive. Then there's lithium ferrophosphate, which is just an iron-based cathode; those are cheaper but less dense. Those are the two paths forward.</p><p>We are seeing the prices of battery cells start to stall out, in the sense that, for a long time, we had 20% annual cost improvements, and now we've had 5% annual cost improvements over the last 5 or so years. That is stalling, but I think a new cathode that is denser and cheaper would just be a game changer.
I invested in this company called Ouros, and I think they have that. They've already tested it in single-layer cells and confirmed it is much denser and also much cheaper than NMC or LFP. That's one thing I got wrong, and I actually made a bet on the other side of it.</p><p><strong>[00:26:40] Dan: </strong>Okay. Actually, another bet you made was on space. You bet Robin Hanson $100 against his $200. Your bet was that a human would set foot on Mars by the end of Q1 2030. Now, sitting here four years later, would you double down, or are you less confident?</p><p><strong>[00:26:55] Eli: </strong>No, I think I'm going to lose.</p><p><strong>[00:26:57] Dan: </strong>You think you're going to lose? Okay.</p><p><strong>[00:26:57] Eli: </strong>To be fair, I got two-to-one odds. I told Robin I was 40% confident. He offered me two-to-one odds, which was generous. I am less confident now only because I think the schedule will slip. I do think Starship will enable that to happen at some point, but it might be four years later or something like that.</p><p><strong>[00:27:19] Dan: </strong>Okay. What about in biotech and health? The main things you talked about there were mRNA vaccines and DeepMind's protein folding. Both of these strike me as technology breakthroughs that were waiting to be commercialized. Do you have any updates off the top of your head on either of those that make you either more or less bullish?</p><p><strong>[00:27:36] Eli: </strong>I've been thinking a lot about gene therapy in the last couple of weeks, and, related to mRNA but not only mRNA, the biomolecular kinds of treatments. I just think the potential is so enormous. My concern right now is that the regulatory system is not right for those kinds of breakthroughs. These biomolecular treatments are big molecules, and the existing drug approval system is designed for small molecules, even in terms of preclinical testing for toxicity.
Toxicity in small molecules happens because the molecule accumulates in some organ or something like that and causes damage.</p><p>What they do is put this in a mouse and pump up the-- It's actually cruel. We pump up the doses so high that we determine the LD50, the lethal dose at which 50% of the mice die. Then you make sure that the human dose, on a per-kilogram basis, is way lower than that when you do it in human trials. With these bigger molecules, which are more the natural language of our existing cells, there's not really that same concern. The immune system will clear it automatically if there is too much, because it's used to dealing with molecules of that size. It's to the point that we've had to redefine toxicity for biologics. It doesn't even mean the same thing, but they still require toxicity testing.</p><p>I worry a lot about that. I'm concerned that we won't reap all the benefits of those breakthroughs, but I think they're very, very significant. You could have a different system where maybe you regulate the vector or the platform, how you deliver the genetic payload to the cell, but you don't regulate the exact payload, and it's the doctor's discretion. In the same way that with surgery, doctors don't get every individual surgery FDA-approved, even though surgeries are completely unique and customized to the needs of the patient.</p><p>There was some positive news this week. Congress basically instructed the FDA to come out with a way to identify these genetic platforms, gene therapy and other biologic platforms. The FDA has issued some draft guidance saying how they'll do it. I don't know enough about it to evaluate whether that's going to be sufficient or doesn't go far enough or whatever, but I think it's incredibly powerful. On the multi-decade timeline, it's going to be very important.</p><p><strong>[00:30:40] Dan: </strong>Okay. Let's talk about energy.
The main topics you discussed were wind, solar, and batteries, which we already talked about. The big one that I didn't know a lot about is geothermal. It seems nuclear gets a lot of attention right now because there are a bunch of policy disputes going on all over the world about, "Should we do nuclear? Should we not?" You get a bunch of people who take a hard stance on each side. You stated that geothermal has the potential to maybe make nuclear not even relevant if it really works. What's the latest on geothermal? Would you have any updates to the post from the last couple of years?</p><p><strong>[00:31:11] Eli: </strong>Yes, we've had some next-generation test wells, or actually even commercial projects. Fervo did a project with Google for a data center, and the results were good. A lot of the model in terms of making geothermal economical is going to come down to drilling costs, and drilling costs seem to be getting very low even with mechanical drilling. I made an investment in energy-based drilling, but my friend Austin Vernon keeps telling me, "Oh, mechanical drilling is also getting cheap." Just using regular drill bits and going through granite, there's a lot we could do to optimize those bits and the surrounding systems to go through granite. That means we can get much deeper and get to hotter temperatures and so on.</p><p>Everything seems bullish. Maybe one update is, I would say, that pure thermal energy seems like the lowest-hanging fruit for geothermal. If you think about a paper mill, it runs 150-degree-C steam through pipes to help with the drying of the paper. That's low-temperature heat that geothermal can provide very easily. There's a bunch of industrial uses. Dairy farms need heat to pasteurize milk and stuff. That's something you could decarbonize very quickly with geothermal, potentially.
Electric conversion, I think, is still very much on the table, conversion to electricity.</p><p>One other thing that maybe I've updated on is I just think steam turbines suck. If you think about how important they still are to us, this is a couple-hundred-year-old technology. With coal plants, you burn this rock and use that to make steam, and then you use that to run a turbine. Geothermal is also using a turbine. Nuclear is also using neutrons to make heat to boil water to run through a turbine, and even some fusion concepts are still using neutrons to create heat, to boil water, to spin a turbine. More and more, I'm appreciating the benefits of solid-state stuff. A solar panel has no moving parts, and it's easier to manufacture; you get more cost improvements in manufacturing at scale because there are no moving parts.</p><p>The holy grail, as I'm thinking about it, is something that is better at converting heat to electricity efficiently, that is solid state, and that could be manufactured in a way similar to solar panels. There are some ideas for this. A lot of them maybe rely on higher temperatures, which you can't get to with, say, nuclear even, but if we could do something like that, it would breathe life into a lot of thermal sources.</p><p><strong>[00:34:17] Dan: </strong>Got it. We'll move on from this post. You articulate really well this idea that just because you have a tech breakthrough doesn't mean it gets digested in the GDP numbers or is useful to anyone. What you really need is someone to productize it and commercialize it and actually make it useful. Just as a thought experiment here, let's pretend that all tech progress on the breakthrough side paused tomorrow, but we still had our existing breakthroughs, things like CRISPR or GPT-4 that haven't really been dispersed throughout the economy. Do we have enough to budge TFP today?
How excited are you about just the existing science breakthroughs?</p><p><strong>[00:34:52] Eli: </strong>I think there's a lot. There are a lot of gains. As you may know, supersonics is near and dear to my heart. That's something we had 50 years ago. The Concorde's first prototype flight was in 1969. Bringing back supersonic flight would be a huge thing, but not even that. We have ramjets. We've had ramjet engines since, I think, the 1940s. I think the Soviets had one. Okay, we haven't commercialized that. You could easily go Mach 3, Mach 4 based on existing technology if we could commercialize it. A lot of the energy stuff is about deployment at scale. Wind, solar, geothermal even.</p><p>It depends on what you want to count as a tech breakthrough, because I think there is significant learning by doing. This is an important point: a lot of the progress is made outside of the science lab, but it's still an innovation, it's still an idea, and it comes from deploying, having contact with the real world, seeing what goes wrong, iterating. Think about why SpaceX and Boeing have such divergent paths. Boeing just designs the product. Whether it's a rocket or a capsule or an airplane, they design the product end-to-end, then build it, then fly it and expect it to work flawlessly, whereas SpaceX is, "Well, we're going to build this half-baked version. We're going to fly it. We're going to see where the weak points are, why it blows up. Then we're going to figure out what the problems are and address those in the next iteration."</p><p>Iteration speed is so important for development. I don't know if you count that as pure ideas or pure deployment, but it seems really important, as attested by the fact that SpaceX has a higher valuation than Boeing's market cap with a tenth of the employees. There are plenty of things. A lot of the genetic stuff, we don't need any fundamental breakthroughs to just deploy that at scale. You could cure a lot of illnesses.
You would maybe need some innovation in the sense of designing the particular molecule for a given condition, but no real breakthroughs are needed. You just need to actually design the stuff, administer it to people, see what goes wrong, adapt, and so on. I think there's plenty. Particularly if you count the more iterative aspect of it, there's plenty that we could do.</p><p><strong>[00:37:29] Dan: </strong>Got it. We don't have to make that trade-off, so that's great. I'm curious, as someone who goes really deep on all of these different technologies but also invests, how do you think about understanding which tech breakthroughs can be turned into a useful product? I feel like this is one of the really big questions of venture capital, and in these types of domains it's actually even more challenging than in a software business, because you're trying to figure out, "What does it mean that we have the scientific breakthrough? Does it actually have a useful application in the real world?" For the investments that you make, how do you think about which technologies to bet on that you think will actually end up being productized?</p><p><strong>[00:38:08] Eli: </strong>You're not betting on a technology. Usually, you're betting on a product. If a founder comes to me and says, "I have this technology," the first question is, "Well, okay, what's the product?" Don't invest if there's no product. If there's no proposed product, I just wouldn't invest in it. If that product doesn't obviously have a big market, I wouldn't invest in it. If they can build this thing that they say they can build, will it be a billion-dollar company or a multi-billion-dollar company? If the answer is no, then that's a screen to say no right away. Then a third one is just, do I understand how this works?
If you don't understand how it works, then I would say don't invest in the idea.</p><p>This is actually a big problem in hard tech VC: there are a lot of VCs that are mainly software funders, but they've seen the success of SpaceX or whatever and they want to have a little bit of a hard tech portion in their portfolio or something, but they don't actually know enough to diligence the idea, or they don't invest enough a lot of times. It is an unhealthy dynamic in venture where some people-- Peter Thiel says, "Okay, you want to be contrarian," and everyone's like, "Oh, yes, we want to be contrarian." What they actually mean by that, not Peter, but most other VCs, is they want to be six months ahead of the consensus. They want to make the bet now on what will be the consensus belief six months from now. You get a big markup in six months because everyone comes to believe it. If that's your play, your object of study is not so much the company as it is your peers in the industry. You're thinking, "What will these other firms believe in six months that I can bet on now, so that they'll come to believe it and it'll be a markup for me and my investment?" Basically, understanding, at least at a very basic level, how this technology or invention works is table stakes, I think.</p><p><strong>[00:40:25] Dan: </strong>When you invest in a deep tech company and you're looking at the founder, what are the most important skills that you look for? I'm assuming this is different than what you might look for in a software founder.</p><p><strong>[00:40:35] Eli: </strong>In a hardware founder, it's really important to know the field really well, know the industry really well. With a software founder, it's like, "Does he need to have deep experience across the software industry and know what the trends are and what these technology stacks are or whatever?" A hardware founder needs to know their industry cold, I think.
They need to really know, "Why did this company fail? If I use this material to build my thing, what are the trade-offs associated with that? If I had to manufacture this literally myself, with my own hands, how would I do it?"</p><p>I think that expertise is really valuable. Then the other thing that's really important-- I think this is probably also important in software, which is just iteration speed, again. How much does the founder value speed and the cycle of having at least something to test against the real world very quickly and then iterating to fix what went wrong? I think that's really important as well.</p><p><strong>[00:41:58] Dan: </strong>Got it. You commented on this, but this was going around Twitter and the internet a lot. Jensen Huang said in an interview that he would never have started Nvidia if he had known how hard it would be. You commented on a post basically saying you think this sentiment is actually a pattern in hard tech, that a lot of these founders end up saying, "I don't know if I would have done it if I had known how hard it would be." The implication you draw is that maybe a little naivete is actually good.</p><p>Is that just a fact of life, that this stuff is painfully hard? To me, that would be bad news, because how much naivete can we have out there? It feels like it would be a bottleneck to getting people to go into this stuff. I'm curious how you think about that.</p><p><strong>[00:42:37] Eli: </strong>I think it is basically a fact of life. Maybe it's bad news. I think the good news is there's a lot of naivete out there, particularly among young people. Yes, it is way harder to build a company than you think, even if you adjust for that fact. Even if everybody tells you, "It's going to be way harder than you think," it's still going to be harder than you think. I contrast this a little bit with Google's X, their moonshot factory. It's led by Astro Teller, who's a super smart guy.
The whole MO of that lab is, "We're going to study this potential product, and we're going to cut bait on the ones that seem too technically hard."</p><p>I think the result of that is that you're cutting bait on everything. Whereas if each of those were a founder-led startup that has to go through a year of hell to get the product on the market, they might want to give up, but they can't. There are some things they probably cut bait on that would have worked if they had instead been done by a founder who had funding, but when it got hard, he just had no choice but to keep going. Yes. You have to do things. We do these things not because they are easy, but because we thought they would be easy.</p><p>I see that over and over again in hard tech startups: it's always just harder than you think. Then the other thing is you have founders who are very technical, and they have to learn the business side of it. They have to learn how to fundraise. It can be excruciating. You do 300 pitches or whatever, and you still don't raise your round. That's like, "Guess what? You have to keep going." It is hard.</p><p><strong>[00:44:30] Dan: </strong>You've got founders, you've got corporate labs. What about the government? When should they take on a hard tech project? Obviously, at some point in time, they took us to the moon, but in this day and age, what role do you think they have to play?</p><p><strong>[00:44:41] Eli: </strong>It's a good question. The way they can be productively involved is in funding first-of-a-kind projects. Think about geothermal. There are, I don't know, what do you want to say, four different modalities that I would consider next-generation geothermal energy production. Take the first company that shows up wanting to do a discretely new kind of well. If the government would say, "We will fund you to do this project," I think that would be really valuable. Then they do the project, and hopefully, it succeeds.
Then investors see that it succeeds, and the private capital is there to take it to the next step.</p><p>There is an Office of Clean Energy Demonstrations that's supposed to do this. They don't have any geothermal money, interestingly. They got funding for a lot of other things, but not geothermal. Then there's the Loan Programs Office. They don't quite do this; they take the next step. They don't want to take too much technical risk, but if the technical risk is burned down and private markets still don't see that, then they step in. I think that's one thing. I'm thinking about, say, nuclear. We have a national nuclear reactor laboratory. It's called Idaho National Laboratory. Now, they have not actually built a new reactor since the 1960s.</p><p>For technology that hasn't come to market, building test reactors would be great. They are building one right now, but they hadn't done it for over 50 years. Our nation's nuclear reactor laboratory went over 50 years between test reactors. Then I think of something like partnerships with industry. At INL, they have an exemption from NRC rules. They have a lot of facilities that are out in the desert. You can imagine a regime where they basically say to all US nuclear startups, "You can come here, use our facilities, build your reactor in six months. You can test it destructively. You can blow it up out in the middle of the desert where no one will get hurt. Then you can iterate."</p><p>Iteration is so important. That's something that's not done in the nuclear industry. It's basically 12 years and $12 billion between reactors. Providing those facilities and a safe harbor and regulatory exemption would be really valuable in nuclear. I'm less enamored of just production subsidies. If you think about the IRA, one of the things it does is put these huge subsidies on clean energy generation. That, to me, feels more like productionism.
There's this constant problem with industrial policy: if you coddle an industry too much, it just gets fat and happy. US shipbuilding is a perfect example. We have pretty similar numbers of shipbuilding workers to Korea and Japan.</p><p>We produce way fewer ships than they do. The productionism we've done there has not worked well. I think something similar could happen in energy generation. If you don't actually face market pressures because your subsidies are so generous, you might stop innovating a little bit. To answer your question, I think governments can be helpful, but it's hard, and you have to think through carefully whether you're helping, whether you're getting a lot of bang for your buck, or at least whether you're not being harmful in the support that you're giving to an industry. I spent a couple of years at Boom working on policy and so on, and had a great relationship with NASA people.</p><p>I remember one time when we had to figure out how to use some prediction software that was NASA software. Our engineers tried to learn it and couldn't do it. We hired an outside consultant to teach them. The consultant was basically learning it as they went and then trying to teach our team. That didn't work. I called up some contacts at NASA, and they were like, "Yes, just fly out, bring a team, and we'll walk you through it." They did, and it was great. It was just an amazing resource for the industry to have these NASA experts who are accessible and willing to talk to engineers in the private sector.</p><p><strong>[00:49:23] Dan: </strong>Yes, interesting. I was just looking this up in prep for this call, but it looks like the number of people graduating with a computer science degree in the US increased from about 40,000 in 2010 to over 100,000 today, and it's still growing really rapidly year over year. Do you think we have too many people going into software?</p><p><strong>[00:49:40] Eli: </strong>Probably.
It depends on what margins and what constraints we're talking about. If you believe the physical world is over-regulated, and it's hard to get things done in the physical world, then that increases the rate of return in the software world, where you have the First Amendment, which limits any preemptive regulation of publishing software or something like that. The returns in software are probably higher than they should be, relative to the returns in other sectors, if you believe that those other sectors are over-regulated in a way that depresses their rate of return.</p><p>Now, I don't know. With computer science education, my understanding is a lot of it is super theoretical and not actually very useful to industry. I think we're almost certainly, no matter what, overproducing people who understand the Church-Turing thesis and these more abstract, less relevant points of computer science.</p><p><strong>[00:50:44] Dan: </strong>What about the corollary to this, which is, do you think we have enough top talent doing biotech, energy, transportation, these more physical-world industries?</p><p><strong>[00:50:52] Eli: </strong>I certainly think that if the rate of return in those industries were higher, we would get more talent flowing to them.</p><p><strong>[00:51:01] Dan: </strong>I guess this is the point. Is your view that the rate of return in an industry is what causes talent to flow to it?</p><p><strong>[00:51:06] Eli: </strong>I think so. I think it's at least part of it. Yes. There's more opportunity. If there were higher returns in biotech because, say, the FDA adopted some reform that's more permissive but also still smart and good for the industry and maintains trust in those products and so on, then, yes, I think there'd be a gold rush in biotech, and more people would flow to that industry.
Making sure that you're not artificially depressing the rate of return in these other areas is, I think, how you should think about managing talent flows. It's not about managing the talent flow directly, but about making sure that the rate of return is sufficiently high in those sectors.</p><p><strong>[00:51:59] Dan: </strong>Got it. We've alluded to, in a lot of these questions, this idea that regulation, broadly defined, hinders productivity growth. I'm curious: say we have a new Elon Musk, and he's 12 right now. We say he's going to go forth and lead the country in some way. At the margin, do we need him more to be a policy expert who goes and adjusts policy and gets into politics? Or do we need him to just say, "Screw it. I'll work within the existing balance of how everything works. I'll just do three more really good companies"? Where do we need that incremental Elon Musk?</p><p><strong>[00:52:31] Eli: </strong>Yes, I would say do the thing. The policy stuff is important. Obviously, I'm working on the policy stuff, so I think it's important. But I also believe that a lot of times you can actually be more effective at policy change if you have an example or a thing that you're trying to push through. When I was at the Mercatus Center, I wrote a paper on supersonics. That paper circulated within DOT and got some interest and stuff like that. We were able to move the needle on supersonic policy a bit. I felt I could move the needle more once I got into the industry and was actually like, "We want to build this thing, and you need to change policy so that we can build it."</p><p>You can create jobs and whatever else; there's economic activity associated with it that the regulator doesn't want to stand in the way of. Creating an intellectual foundation is important, but at the margin, I would say it's more about people pushing on the boundaries from within industry and trying to do things that have regulatory risk.
I think a lot of times, if what you're doing is actually a singular thing, regulatory risk is sometimes lower than people think. I think regulators do want to accommodate. When I say singular, I mean the opposite of, say, cryptocurrency, where everyone's doing cryptocurrency stuff.</p><p>If you're just, "I'm a crypto person and I want to engage the SEC and change the rules," that's going to be tough, because they have to take everybody into account. If you actually have a unique thing that you're trying to do-- I think regulators tend to be open to engaging and figuring out a solution for you, particularly if you're also investing in lobbying and they're getting letters from members of Congress saying, "What's going on here? Why are you stopping this one entrepreneur from my district?" I think there are paths forward if you're one of these singular entrepreneurs.</p><p><strong>[00:54:34] Dan: </strong>Got it. You've been talking about NEPA for many years. I think you're actually the person from whom I first heard about it, and now it's everywhere. How close are we? Is there any realistic world where, in the next couple of years, it gets either repealed or reformed to the extent that it's not a serious hindrance to growth? What is your confidence on that?</p><p><strong>[00:54:53] Eli: </strong>I think there are interesting discussions going on on the Hill now and an awareness of it. I don't think the solutions being discussed go far enough. I think a possible trajectory here is that something passes, it's still not enough, and we keep beating our heads against it. Then I think, ultimately, the driving force for reform here is actually the climate movement. That's not going away. If we reformed NEPA and it's still really hard to deploy clean energy, there's going to be more interest even two years from now. Say we did something now and it wasn't enough; two years from now, the industry is going to be like, "Hey, this didn't work.
We need more reform."</p><p>I'm actually super excited about the climate movement because they're the one mainstream movement that's not complacent. They actually want and need a solution, will push hard for it, and have very strong conviction about it. I think there's basically no way we can deploy clean energy on the scale that we need it, or that the climate movement believes we need it. I think we could actually use even more than they think they need, but that quantity of energy is not getting deployed in any reasonable timeframe without bigger NEPA reform.</p><p>That would be my point of optimism: whatever we do that doesn't work, I don't think it's one and done, where we do a reform and then that sucks up all the attention and everyone says, "We've done NEPA reform, and now we can move on." I think we will continue to enact reforms until something actually works to deploy energy, because the climate movement thinks it's so important.</p><p><strong>[00:56:57] Dan: </strong>There's a view out there that AI is the last tech breakthrough we would technically ever need, because the theory goes that you create AGI, and it can solve all of your other problems for you. You just ask it, "Hey, can you build a spaceship that gets us to Mars? Can you cure cancer? Can you do anything that we care about?" What do you make of this?</p><p><strong>[00:57:17] Eli: </strong>I am learning along with everybody else. On one hand, these are very cool tools. I'm super thrilled to have them available. On the other hand, I see many problems with a purely optimistic or purely pessimistic approach, wherever you come down. I think the most likely thing is that not that much changes, sadly. Part of this is pattern matching from past experience. I'm old enough to remember the '90s and the web. Everyone was like, "Oh, this is going to change everything." On one hand, it did. It changed a lot of social stuff.
It didn't really fundamentally remake the economy in the way that we thought it would.</p><p>The example I like to give is if I want to sell my house, I'm probably still paying 6% to a realtor, which was the biggest no-brainer thing that I thought the web would fix. It's very easy to list the house and disrupt the real estate cartel. It turned out, no, it didn't do that. I think there are going to be a lot of domains where AI is powerful and interesting and could theoretically do something, but it's going to be a long time in adoption. In the health sector, you could replace doctors for a lot of things with a medical chatbot, or even something interpreting test data or whatever. They're still going to want human doctor eyes on it, I think, for a long time.</p><p>That wasn't your question, though. I think your question was, is it going to be capable of doing a lot of these tasks? I think the tools are pretty good, but they have some error rate. If you think about it, any interesting question is actually a long series of smaller, more isolated tasks. If you have a 1% error rate on each step, and it's a 50-step problem, it's not going to be a reliable solution to the task. Maybe there'll be something self-correcting or something like that. I don't know. My inclination, just looking at the web-- What else was going to save us? Crypto was going to save us; smartphones were going to be revolutionary.</p><p>IoT--I remember the internet of things and smart cities were going to revolutionize the economy. I'm not saying AI is like that, but I've gotten to be more skeptical--I want to see it, I want to see the change, before I get my hopes up. Because I've hoped for fast growth for a long time, and I've just been disappointed so many times. Then I think the other thing is I'm not sure people's mental models around AI are correct. If we think about even superintelligent AI--
I don't even know what this means, but let's say that there's some entity that's much smarter than us.</p><p>Is it uniquely capable in a real-world way? I think about, "What are the most competent and capable entities on our planet?" It's corporations. Corporations run the world. Are corporations smart? If you think about it from an information-processing standpoint, they are very slow institutions. Input filters into the corporation, and then it gets passed around for weeks or months before a decision is made. In a sense, every single corporation on the planet is dumber than you and me, and yet they run the world, and we don't. Anyway, speed of information processing doesn't seem to correlate with actual capabilities in the real world. That's at least a lingering doubt in my mind that people have the right model here.</p><p><strong>[01:01:20] Dan: </strong>Got it. I want to talk about your most recent posts on Joseph Tainter's book, <em>The Collapse of Complex Societies</em>. His basic theory is that you increase societal complexity, and at some point you start to get diminishing returns to that complexity, until eventually people get fed up, and this can lead to collapse. He claims that it has in many cases in the past. The question I had for you, just riffing on this, is that it seems like our society is way more complex, by many orders of magnitude, than past societies. Why should we believe that we'll hit some point where it becomes too complex?</p><p><strong>[01:01:58] Eli: </strong>Yes. The reason we can be more complex is because we have a different technological dispensation. We have the technological tools to turn that complexity into additional output at a much higher rate than past societies did. One way to think about complexity is how much specialization do you have, say in the labor market, or how many administrators do you have? 
If you think about, I don't know, the Maya or the Minoans or whatever, if they got to a point where the fraction of their economy that was administrators matched ours, I think they would have gotten fed up way before we do.</p><p>We could tolerate a higher percentage of people being administrators, because to some extent, that administration does add value in our economy in a way that it wouldn't in an agricultural society. It's not just how complex are you in absolute terms, but how complex are you relative to the technological dispensation? In particular, what is the marginal return to that complexity? The answer is going to be different depending on how much technology you have available.</p><p><strong>[01:03:20] Dan: </strong>Oh, I see. You'd be more complex, but you're getting more returns.</p><p><strong>[01:03:24] Eli: </strong>Yes. As long as you're getting positive returns at the margin to each additional unit of complexity, everything's fine.</p><p><strong>[01:03:30] Dan: </strong>Yes. Until it starts to slow down.</p><p><strong>[01:03:33] Eli: </strong>Until it's not, yes. Until you're starting to actually lose output from the additional complexity.</p><p><strong>[01:03:40] Dan: </strong>How much do you worry about this? Do you worry about Tainter's idea coming true for the United States or leading Western nations today?</p><p><strong>[01:03:46] Eli: </strong>I'm not, by nature, a worrier. I think I would say not terribly worried. On the other hand, do I think, "Is this a dynamic that seems to be going on right now?" I would say yes. One of the things that Tainter identifies is that it manifests in apathy to the wellbeing of the polity. People don't care anymore about their society functioning or their society continuing to exist or doing well. It's a decline in-- maybe patriotism is a related word here, but not caring about their government, not identifying with it, and seeing it as somewhat oppositional. I think we have that in spades. 
A, we have a lot of complexity. B, we have a lot of apathy or even hostility to the current system.</p><p>Insofar as you have people repeating Russian talking points about foreign policy and stuff like that. If you think about it, the Roman peasants got so fed up that when the barbarians came in, they were like, "Come on in. We don't care." That seems to be a thing--for at least, I don't know, 10% of the population or something like that, we're there. I do think it's relevant. I think the cure is actually higher returns. More growth seems to be basically the cure to the disease that Tainter is talking about. Both in the sense of exogenous growth--if we just got more growth, we'd have higher returns, and that would be great.</p><p>I think the other thing is, to get more growth, we actually do need to, in a concerted way, try to reduce complexity and root out the negative-value forms of complexity, like the bureaucratization and the over-administration and the over-regulation of the economy. If you did that, that would also generate more growth. Different than exogenous growth, but just actually addressing the problem that Tainter identifies head on and trying to cultivate our garden of complexity, let's say, weed out the bad parts of it. I think that would be really beneficial.</p><p><strong>[01:06:17] Dan: </strong>It seems like we've been really good-- You mentioned you have two options: continue to increase growth and abundance, or decrease complexity. It seems like we've been really good at just outrunning and finding growth in little pockets over the last hundred years. Even if it's been slowing down, we're still eking it out relative to how bad things could be. It doesn't really seem like we've done anything on reducing complexity. Do you think this is a problem that we're capable of solving? 
Do you see any indications or areas of the economy where we've been able to reduce complexity?</p><p><strong>[01:06:49] Eli: </strong>Yes, I don't know. I worry about this. I think coming out of, you know, George Mason and doing an econ PhD there--it's like the home of public choice economics and a lot of public choice analysis. On one hand, it's fully positive. It's not normative analysis. It's just thinking about how these actors interact or whatever. Also, if you want to translate it to a more normative framework, a lot of the analysis is very low-agency, and it's just like, "This is the way it is." My view is we don't have agency if we approach it with that attitude; if you just assume that's the way it is, that's a self-fulfilling prophecy.</p><p>If you can approach it with higher agency--these policies and stuff, people do make and influence policies all the time. Why not us? Why wouldn't we be the people to do that? I think we can make progress on it. I do worry that we are fighting an uphill battle in the sense that it's much easier to add complexity than it is to systematically reduce it.</p><p><strong>[01:08:08] Dan: </strong>How do you stack rank, call it, internal collapse from the Tainter model against the more commonly talked-about existential risks, like nuclear war or AI kills us all or something? Which one do you think we should be more concerned with?</p><p><strong>[01:08:23] Eli: </strong>They are different. Even in the collapsed societies, it's not that everybody dies. If you're approaching it from a Will MacAskill EA perspective, it sounds like he's actually concerned about literally everybody dying and all the future value getting wiped out. I don't think the Tainter collapse model would support that being as important a risk. I'm much more concerned about the Tainter-style collapse, in part because, A, it's happened a couple dozen times already in history. This is a thing that we know happens. 
I think the other thing is that a lot of the existential risk stuff is iterated first-principles reasoning, which I don't actually think is a very good method.</p><p>You could be a very rational person, but you take a couple of premises, you draw a conclusion, that becomes your new premise, then you do it over and over again. It doesn't take much error anywhere in that chain to make your ultimate conclusion completely fallacious. I just don't actually think that's a very good method. Asteroid risk is a thing; that has also happened. There is empirical data on that. We should take that seriously, maybe. The dinosaurs didn't have a space program, which is bad for them. Yes, 20-something times in the last 5,000 years, we've had this complexity cycle. That seems worth thinking about.</p><p><strong>[01:09:55] Dan: </strong>Final question here. Let's say a young high schooler stumbles upon your blog and they say, "I want to help get total factor productivity to 2%. This is my goal. You've inspired me." What do you tell them to do with their career?</p><p><strong>[01:10:08] Eli: </strong>I would say, "A, be curious." I would say, "Actually, don't look at it as a career. I would look at it more as a series of jobs," which sounds like the same thing, but it's actually different. I don't think you can optimize the next 40 years of your impact. I think that's just an impossible task. I would think of it more in five-year chunks. Think about, "What do you want to do in the next five years that's going to set you up to add value?" I think a big piece of it is, "Find what you think is the most interesting thing going on right now. Given your values and your interest in growth, I'm assuming that's going to be a growth-producing activity.</p><p>Find out what you think is most interesting, and find a way to put yourself at the center of that activity. Then just get there and just indiscriminately add value. 
Don't worry about being compensated for it. Just add value to everybody around you. That's how you advance in your field. People will want you around if you're just indiscriminately producing value because they want to be around people who indiscriminately produce value. Yes, I don't think there's a master plan, but follow that interest and follow that passion, and then just add value."</p><p>Then, "Over the course of, like, five years, what you find most interesting in the world might change, and then just do it again. You'll have a series of big contributions that probably take you somewhere interesting but not a planned career."</p><p><strong>[01:11:52] Dan: </strong>Eli, thank you so much for your time today.</p><p><strong>[01:11:54] Eli: </strong>My pleasure. Those were great questions.</p>]]></content:encoded></item><item><title><![CDATA[Sebastian Mallaby]]></title><description><![CDATA[VC, hedge funds, the Fed, and theories of history]]></description><link>https://www.danschulz.co/p/sebastian-mallaby</link><guid isPermaLink="false">https://www.danschulz.co/p/sebastian-mallaby</guid><dc:creator><![CDATA[Dan]]></dc:creator><pubDate>Thu, 18 Apr 2024 10:04:37 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/143701707/182166df68884ec446860aa41a904062.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>Sebastian Mallaby is the&nbsp;Paul A. Volcker senior fellow for international economics&nbsp;at the Council on Foreign Relations (CFR). Mallaby contributes to a variety of publications, including&nbsp;<em>Foreign Affairs,&nbsp;</em>the<em>&nbsp;Atlantic,&nbsp;</em>the&nbsp;<em>Washington Post,&nbsp;</em>and the&nbsp;<em>Financial Times</em>, where he spent two years as a contributing editor. 
He&#8217;s the author of many of my favorite business books of all time, including <a href="https://www.amazon.com/Power-Law-Venture-Capital-Making/dp/052555999X">&#8288;The Power Law&#8288;</a> on the history of venture capital, <a href="https://www.amazon.com/More-Money-Than-God-Sebastian/dp/B00Y4QWJKU/">&#8288;More Money Than God&#8288;</a> on the history of hedge funds, and <a href="https://www.amazon.com/Man-Who-Knew-Times-Greenspan-ebook/dp/B01CDVCAXS">&#8288;The Man Who Knew: The Life and Times of Alan Greenspan&#8288;</a>, which is one of my favorite biographies ever in any category.</p><div id="youtube2--6Y7lGwGnpw" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;-6Y7lGwGnpw&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/-6Y7lGwGnpw?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><iframe class="spotify-wrap podcast" data-attrs="{&quot;image&quot;:&quot;https://i.scdn.co/image/ab6765630000ba8acc120957532477d9b390a37a&quot;,&quot;title&quot;:&quot;Sebastian Mallaby&quot;,&quot;subtitle&quot;:&quot;Dan Schulz&quot;,&quot;description&quot;:&quot;Episode&quot;,&quot;url&quot;:&quot;https://open.spotify.com/episode/5T3IZ2cF4ocYlYyfSENyoR&quot;,&quot;belowTheFold&quot;:false,&quot;noScroll&quot;:false}" src="https://open.spotify.com/embed/episode/5T3IZ2cF4ocYlYyfSENyoR" frameborder="0" gesture="media" allowfullscreen="true" allow="encrypted-media" data-component-name="Spotify2ToDOM"></iframe><h3>Timestamps</h3><p>(0:00:00) Intro</p><p>(0:00:57) Business journalism</p><p>(0:04:11) Journalists as investors</p><p>(0:06:24) Most misunderstood part of the economy</p><p>(0:07:34) EMH</p><p>(0:11:49) Private v public markets</p><p>(0:17:27) 
Liquidity premium</p><p>(0:20:13) New asset management models</p><p>(0:26:12) Investor specialization</p><p>(0:37:42) Firm culture vs outlier talent</p><p>(0:41:00) Sequoia and FTX</p><p>(0:44:40) Meme stocks</p><p>(0:45:56) VC vs hedge funds</p><p>(0:50:38) Decision making at the Fed</p><p>(0:57:22) Evolution of the Fed</p><p>(1:01:34) Powell</p><p>(1:05:39) The Fed&#8217;s political independence</p><p>(1:07:13) History, Marx, Carlyle</p><h3>Links</h3><ul><li><p><a href="https://twitter.com/scmallaby">&#8288;Follow Sebastian on X&#8288;</a></p></li><li><p><a href="https://www.cfr.org/expert/sebastian-mallaby">&#8288;Sebastian's profile for the Council on Foreign Relations&#8288;</a></p></li><li><p><a href="https://twitter.com/dnschlz">&#8288;&#8288;&#8288;&#8288;&#8288;Follow Dan on X&#8288;&#8288;&#8288;&#8288;&#8288;</a></p></li><li><p>Subscribe to Undertone on&nbsp;<a href="https://www.youtube.com/channel/UCtRjQ8EA3Via7KXzv9F173Q">&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;YouTube&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;</a>,&nbsp;<a href="https://podcasts.apple.com/us/podcast/the-world-of-yesterday/id1693303954">&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;Apple&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;</a>,&nbsp;<a href="https://open.spotify.com/show/59YkrYwjAgiKAVMNGWPaLE">&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;Spotify&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;</a>, or&nbsp;<a href="https://www.danschulz.co/">&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;Substack&#8288;&#8288;&#8288;&#8288;&#8288;&#8288;</a></p></li><li><p>Watch or listen to previous episodes of Undertone with <a href="https://www.danschulz.co/p/tyler-cowen">&#8288;&#8288;&#8288;&#8288;Tyler Cowen&#8288;&#8288;&#8288;&#8288;</a>, <a href="https://www.danschulz.co/p/vitalik-buterin">&#8288;&#8288;&#8288;&#8288;Vitalik Buterin&#8288;&#8288;&#8288;&#8288;</a>, <a href="https://www.danschulz.co/p/scott-sumner">&#8288;&#8288;&#8288;&#8288;Scott Sumner&#8288;&#8288;&#8288;&#8288;</a>, <a 
href="https://www.danschulz.co/p/samo-burja">&#8288;&#8288;&#8288;&#8288;Samo Burja&#8288;&#8288;&#8288;&#8288;</a>, <a href="https://www.danschulz.co/p/3-steve-hsu">&#8288;&#8288;&#8288;&#8288;Steve Hsu&#8288;&#8288;&#8288;&#8288;</a>, and <a href="https://www.danschulz.co/">&#8288;&#8288;&#8288;&#8288;more&#8288;&#8288;&#8288;&#8288;</a></p></li><li><p>I love hearing from listeners. Email me any time at dan@danschulz.co</p></li><li><p>Share anonymous feedback on the podcast: <a href="https://forms.gle/w7LXMCXhdJ5Q2eB9A">&#8288;&#8288;&#8288;&#8288;&#8288;https://forms.gle/w7LXMCXhdJ5Q2eB9A</a></p></li></ul><h3>Transcript</h3><p><strong>[00:00:11] Dan: </strong>Welcome. This is a conversation with Sebastian Mallaby, senior fellow in international economics at the Council on Foreign Relations. He's authored many of my favorite business books of all time, including <em>The Power Law</em> on the history of venture capital, <em>More Money Than God</em> on the history of hedge funds, and <em>The Man Who Knew: The Life and Times of Alan Greenspan</em>, which is one of my favorite biographies in any category ever.</p><p>What makes Sebastian's books and insights so unique is you can tell he's spent a ton of time with the biggest names that impact the global economy. You get the sense that he really understands what makes them tick. Insiders in the industries he writes about comment on the accuracy of his books, and it really comes out in this conversation. He's also a student of history, and there's a fun bit at the end where we get into Marxism and the great man theory of history. I hope you enjoy it.</p><p>All right. I'm here today with Sebastian Mallaby. Sebastian, welcome.</p><p><strong>[00:00:56] Sebastian Mallaby: </strong>Great to be with you.</p><p><strong>[00:00:57] Dan: </strong>Okay. First question. 
When the book <em>The Power Law</em> came out, I was really struck by how many VCs took to Twitter or another public forum to talk about it-- The response I generally saw was something to the effect of, "Yes, if you want to read a book on venture capital, this is how it is." What do you think you understand about business writing that others might miss, such that your writing actually gets praised by the practitioners?</p><p><strong>[00:01:21] Sebastian: </strong>It's partly patience. I do spend five years or so on these projects, and so I take the time to really go native, get as much access as possible. If it requires me to accept that somebody doesn't want to see me, but I can circle back in a year, that's what I do. I end up having really spent quality time, two-hour interviews, recorded, where there's really time to go into a lot of depth with folks.</p><p>I also build an environment of trust. Most of the time I'm approaching somebody because I've gotten to know somebody else who knows them, and I'm being passed along, so it's a warm intro. Unlike most writers, I'm not worried about sharing pages I've written. What I say is, "Look, this is an independent project. I don't promise to change a single comma, but I do promise to listen to what you have to say. If you've got comments, tell me."</p><p>In my view, that makes the book better. It means that if there's an error, which is a real error, then I will correct it, and if it's a matter of judgment, I will make a judgment. In my view, that serves the reader, but of course, it also serves the source, because they feel like I'm not a cowboy who's going to completely misrepresent what they're telling me, and therefore, they talk more.</p><p>The frequent outcome of that sort of process is that I send people some pages. They say, "Listen, it's not wrong, but we feel, or I feel, that there was some angle you under-emphasized. Why don't we have another conversation, and I'll explain why?" 
So then I get another hour and a half of time with someone. That always deepens my understanding, and it makes for a better result. Now, it is time-consuming. You have to be willing to write a first draft, a second draft, a third draft, et cetera until you really feel that you've gotten it right, but I do feel it's the best way to proceed.</p><p><strong>[00:03:23] Dan: </strong>Say, a talented young person said, "I want to be the next Sebastian Mallaby," is there any advice that you would give them specific to journalism?</p><p><strong>[00:03:31] Sebastian: </strong>I guess it's probably the case that you have to get to these projects after you've had a bit of journalism under your belt. In my case, I'd been a full-time journalist at <em>The Economist</em> magazine, and then at the <em>Washington Post</em>, writing editorials and opinion columns before my book-writing career really began to take off. I think that was important because you build a foundation of knowledge, you build credibility, you get some name recognition, and that enables you to go out and get the good access that you need in order to have a shot at writing a good book.</p><p><strong>[00:04:09] Dan: </strong>It's striking to me that both Alfred Winslow Jones, who you credit as starting the very first hedge fund, as well as Mike Moritz, who, of course, started Sequoia and was one of the very early stars of the industry, they both began their careers as journalists. What do you think is useful about journalism in a career as an investor?</p><p><strong>[00:04:26] Sebastian: </strong>Actually, that's a great parallel, which I hadn't quite thought of, but you're completely right that Alfred Winslow Jones was a kind of a journalist. He wrote long essays in what, at the time, I think, were the more heavyweight versions of, I guess, <em>Fortune,</em> maybe <em>Forbes</em>. I think <em>Fortune.</em> He was on the borderline between a bit of journalism and academia. 
He had a PhD in sociology.</p><p>Michael Moritz was more of a classic journalist; he wrote for <em>Time</em> magazine, and because he was doing this work much later than Alfred Winslow Jones, he was a journalist in the more recognizable modern sense. You're right that they both had that formation. I guess it's a natural thing for someone to do if they are very curious about the world and they want to go talk to people and discover stuff, and that's what investors do as well. I think that's the reason.</p><p><strong>[00:05:21] Dan: </strong>Back when you were starting your career, did you have any business journalists that you looked up to or tried to emulate, or do you feel like you're trailblazing?</p><p><strong>[00:05:29] Sebastian: </strong>I think Roger Lowenstein was influential in my development. He wrote the famous book <em>When Genius Failed</em>, about the hedge fund Long-Term Capital Management. He also wrote a bunch of other books. I really like his long biography of Warren Buffett, for example. He focused on telling it like it is, not rushing to judgment to condemn a business operator as a greedy capitalist, but more trying to understand the intellectual process that went into investment decisions that then made a lot of money.</p><p>It was less about what you might call wealth porn and more about the geeky intellectual understanding. I related to that, and I guess that's what I've tried to do as well.</p><p><strong>[00:06:21] Dan: </strong>To that end, what do you think is the most misunderstood part of the global economy by the general public?</p><p><strong>[00:06:28] Sebastian: </strong>I would say that finance is right up there, in the sense that it's routinely condemned in a, I think, rather knee-jerk fashion. The reality is that if we don't have central planning, which over history hasn't worked terribly well, you need decentralized planning. 
That means you need financiers to allocate capital intelligently. We have limited amounts of capital to allocate. There are also limited numbers of workers in the economy and limited capital goods in the economy. If you give somebody a bunch of capital, they're going to go off and hire some people and buy some capital goods and use up the non-financial resources as well.</p><p>If you want growth, if you want productivity, you have to give these resources to the smarter operators who are going to do something that's actually useful, useful being defined as producing goods that people will buy because they want them, they have utility. I think decentralized planning is at the center of what makes economies good. That is largely carried out by investors.</p><p><strong>[00:07:31] Dan: </strong>It seems like people have very different views on the efficient market hypothesis. It's super debated, right? In academia, they'll tell you, "Hey, it's mostly true with a couple of nuances." In business, sometimes they'll say it's mostly not true, with a couple of nuances. Just from your perspective, what's the best way to understand it?</p><p><strong>[00:07:49] Sebastian: </strong>I think, first, one should make a distinction and point out that the efficient market hypothesis, I think, was only ever intended to apply to public liquid markets. When you're talking about venture capital or private investments, it doesn't even apply there, because arbitrage is the central idea that makes markets fairly efficient. In other words, you can look on your screen and see two bonds or two stocks and figure out whether one of them is mispriced relative to the other. If it is, you sell the overpriced one, buy the underpriced one. That trading is going to realign the prices to what is a sort of rational equilibrium.</p><p>You just can't do that with private companies, whose prices you don't get quoted on your screen and which you can't go in and out of quickly. 
Large amounts of modern capitalism have actually gravitated towards private markets, right? Private equity hardly existed in 1980. Now it's a huge chunk of corporate wealth. Public markets, on the other hand, have seen fewer listings over the last 20 years or so than they did before. It used to be typical, if you were Amazon-- If you were doing really well as a startup, you raised one round of capital, and that took you to the IPO, and you IPO'd at a valuation of around $400 to $500 million. That was the typical thing in the late '90s.</p><p>Obviously, today there are lots of unicorns which have blasted past that half-a-billion mark, and they show no sign of going public. Capitalism has gravitated towards private markets, which are by definition not efficient in the efficient market hypothesis sense. Now, in the public markets, you're right, there's a debate. I think the debate is, in the end, between people who essentially agree but exaggerate the small amounts of disagreement. With the efficient market hypothesis, very few people believe the market is perfectly efficient. If it is efficient, it's because of arbitrage carried out by traders, and that arbitrage is profitable in the theory.</p><p>Even the hardcore EMH theory accepts that there are profitable arbitrage players in the market, otherwise it wouldn't be efficient. I think what one is really talking about is that, by the time you have a liquid public market, and smart investors who have paid a lot of money to get the trades right have looked at all these public instruments and figured out whether they think they are overvalued or undervalued, you then have a price-setting competition between the buyers and the sellers that creates an equilibrium. If you are going to second-guess that fairly efficient price, you have to have some really deep specialized information to have an edge.</p><p>That edge, that development of deep specialized insights, is what hedge funds exist for, right? They are highly, highly compensated, right? 
20% of the upside if they get it right, to try to do deeper research and come up with new ways of going about pricing public instruments. Therefore, they work incredibly hard at it. When they get it right, they make an absolute fortune. Again, going back to your earlier question, is this a social injustice? I think you have to just consider, before you condemn it, that one group in Connecticut could correct a pricing inefficiency that's global and that affects a trillion dollars in market capitalization.</p><p>I think it's more debatable than others make it out to be. Whether or not this is good for society, I would be inclined to be more sympathetic than critics would.</p><p><strong>[00:11:46] Dan: </strong>If private markets are highly inefficient because there isn't liquidity, why don't we see more capital just pouring into private markets and more trading ending up happening between them in order to make it efficient, if there's, in theory, so much return to be had there?</p><p><strong>[00:12:03] Sebastian: </strong>It's a great question. Let's just start from a basic idea of efficiency in capital market intermediation. In a market economy, we have users of capital, typically a company that wants to go do something with the capital, and we have providers of capital, so that would be a pension fund or some other savings institution. Efficient intermediation would be when the financiers who match up the savers with the users of savings don't take too big a slice. You'd expect that over time there would be competition between systems of intermediation, and the ones that are cheap and efficient would win, right? 
You might pay a little bit for custodian stuff, but essentially it rounds to zero.</p><p>Whereas if I, on the other hand, want to access, I'm the pension fund again, I want to access private companies, I'm going to pay a private equity manager to go do this for me, and I'm going to pay them 2% of the capital I give them plus 20% of the profits that they make. Extremely expensive. Why on earth would capitalism gravitate from the cheap form of intermediation, public markets, to the expensive form, private equity? I've basically just rephrased your question thus far, right? The answer has to be that there is some incentive for both the savers of capital and the users of capital to go with that shift to private intermediation.</p><p>Now, I'll give you an example of a toxic incentive that is not good for the way capitalism functions, and I'll give you an example of a benign incentive, which probably is good. The toxic reason to shift over to private equity would be that it's not mark-to-market. Pension funds feel comforted if they give money to the private equity manager who locks it away for 10 years and doesn't keep giving them updates about whether it's up, down, or sideways, and particularly, therefore, they can't get a report one day that says, "Oops, there's been a market crash and you're suddenly down 25%," at which point you the pension fund manager might be fired for having given money to a bad private equity group, right?</p><p>Caution, job security concerns, a desire not to be measured is part of I think what drives pension funds to be happy to pay these very high fees to private equity managers. That's the toxic part. Just as an aside, I think this is a broad phenomenon in modern capitalism that people hate to be measured and they will accept all kinds of inefficiencies and stupidities in order to avoid being measured. 
We can come back to that if you like.</p><p>Now, the benign example, the benign theory of why we see this shift to apparently inefficient private equity is something different. It is that there are huge agency problems in stock market capitalism. You've got the shareholders, they don't really exercise that much influence over the board, the board, in turn, doesn't exercise that much influence over what the chief executive does, and so you get chief executives who can be not terribly good, the evaluation methods aren't very effective, and you've got these principal-agent splits that are super inefficient.</p><p>It's worth paying that 2% and 20% to private equity to get a private equity group that comes in, buys the whole company, and collapses the principal-agent splits. Now you have the owner, the private equity company, directly appointing the CEO and doing the board-like oversight, and they have 100% skin in the game, and they are damn well going to make sure it works properly, and so they're going to drive efficiencies into the company.</p><p>I think you can think of private equity, if you're being sympathetic to it, over the last 20-30 years in the following way. You can say, look, management consultants are the people who prance around, coming up with supposedly new ideas about how you make companies efficient, but the people who really do proper management consulting with skin in the game, aligned incentives, which really drive them to do good management innovations, and by management I'm also thinking about optimizing your balance sheet, optimizing the data science you might use to do good marketing, it's a big range of things, but the groups that really do proper management consulting are actually the PE shops.</p><p>They're not marketing themselves with a PowerPoint presentation flipped over sideways to make it look fancy. They're not performing, they're actually doing. 
They're implementing new management ideas in the companies they own and making capitalism more efficient.</p><p><strong>[00:17:24] Dan: </strong>The idea here on the second piece is that there's actually work being done. You're paying for that work to be done to optimize efficiencies. On the first one, this is really interesting. I think Cliff Asness actually came out with a paper at one point, or a post or something online, where he-- Typically, there's what's called the liquidity premium that you pay on an asset that's highly liquid. He calls it the illiquidity premium: the ability of private equity or other illiquid assets to get marked up for exactly that reason, because people don't like being measured.</p><p>I'm curious how big of an effect you think that actually has. On balance, do you think that illiquid assets actually are charged more than liquid assets in some cases, specifically to obscure being mark-to-market at any given point in time?</p><p><strong>[00:18:10] Sebastian: </strong>I don't know how to directly measure that. I haven't seen anything that tells us how big that effect is. Anecdotally, I do hear that institutional money managers, it could be pensions, it could be insurance, it could be endowments, it could be family offices, that they do like the comfort of not having to wake up in the middle of the night worrying about where the markets are, which is irrational because they ought to be worried. It's their job to worry. As an allocator of capital, you should worry about the capital. It's lazy to lock it up for 10 years and say, "Oh, now I don't have to worry." That doesn't strike me as a good way to do your job.</p><p>It wouldn't be surprising if that is how people choose to do their jobs because-- Let me pick up the breadcrumb I dropped earlier. I've been looking for various reasons recently at the problems of getting artificial intelligence baked into healthcare. 
One of the problems is that any time you try to collect data on health and you have any data repository that can be used to train models, the suspicions around data collection in this sector are enormous.</p><p>You might think, "Well, it's because of privacy concerns and patients shouldn't have their data shared and all that stuff." Actually, what I'm told is that quite a chunk of this aversion to using data in healthcare is that the healthcare providers don't want to be measured. They don't want there to be results saying, "Oh, your hospital uses the following protocol for this procedure." "Look, your survival rate for your patients is 25% lower than these other 7 hospitals that we measured," because that would suck if you're running the hospital and you're trying to get patients in the door. It's this desire to cover up competitively relevant data that I think is one of the big enemies of efficiency and equity in capitalism.</p><p><strong>[00:20:09] Dan: </strong>Oh, that's super interesting. A question on both hedge funds and venture capital, which, in the grand scheme of American business, are relatively modern ideas, especially compared to what we think of as the modern corporation. I guess, would you expect new models of asset management to emerge that have a similar level of impact to private equity, hedge funds, or venture capital?</p><p><strong>[00:20:30] Sebastian: </strong>I would. My theory of the case here is that as-- I'm a bit Marxian on this stuff.</p><p><strong>[00:20:38] Dan: </strong>Okay.</p><p><strong>[00:20:38] Sebastian: </strong>I think that what happens is technology changes. When you get technology changing, the superstructure has to change too. The superstructure includes financial mechanisms and corporate forms. If you just indulge me, if you don't mind, for a second in a little bit of history. Is that all right?</p><p><strong>[00:20:58] Dan: </strong>Please. Yes, let's do it. Yes.</p><p><strong>[00:21:00] Sebastian: </strong>All right. 
I'm going to start in 1840 or thereabouts, where the standard unit of the American economy was the one-man shop. I did say man, I'm afraid it was true in 1840. Probably a large language model these days would correct my language, but it was actually accurate to say man for 1840. All right. The reason is that in 1840, the Industrial Revolution hasn't happened and so there's no returns to scale. You don't have steam power to be able to make a big factory very efficient. You don't have the ability to transport what you produce on railroads around the country yet. There's no economies of scale. One-man shop is fine.</p><p>Then you have the Industrial Revolution and now it makes sense to have a big company, so they start to emerge. You need a way of financing the big company. First of all, the joint stock company is invented, where you issue shares and different investors can own pieces of them. You have limited liability as well to make it not too risky to own a chunk of a company when you're not actually managing it yourself. Somebody else might make a mistake. You don't want to be liable for that, so you need limited liability.</p><p>Then out of this comes the JP Morgan phase of trust-building because there's monopoly power that can be reaped if you have a JP Morgan-type of conglomerate building finance. Then that carries on for quite a long time in America. You've got the period of big corporations, big business, big government. That lasts roughly up to the 1970s when the Japanese start to be too competitive and we need a new way of doing things because otherwise, the Japanese will eat our lunch.</p><p>What we come up with is, oh, look, personal computers are just coming online and we can get rid of a bunch of paper-pushing middle managers in the corporation. To do that, we're going to have to have really, really aggressive CEOs who don't mind firing tons of people. Now we enter the period of hostile takeovers. 
We enter the period of private equity, KKR, Carlyle, these kinds of groups got started around 1980, give or take.</p><p>These private equity and takeover mechanisms, leveraged buyouts and so forth, empower ruthless CEOs to be Neutron Jack; the Jack Welch story of the 1990s became the archetype of this thing, firing just reams and reams of middle management. Antiseptic words like de-layering and unbundling the corporation are how business gurus described it. Then you come into 2000 or thereabouts, and much of that re-engineering of the American corporation has happened.</p><p>Now we've got a new thing going on, which is the internet, the arrival pretty soon afterwards of mobile and cloud, and the opportunities to create enormous amounts of wealth based on companies that exploit these new technologies. Now people realize that it's all about IP. Broadly defined, it's about intangible capital. There's a whole book, <em>Capitalism without Capital</em>, by two British authors, which talks about this. Their argument in <em>Capitalism without Capital</em> was that we don't need so many capital goods anymore. It's all about business processes and intellectual property.</p><p>How do you create this intangible capital? It turns out that you have lots of parallel experiments going on in applied science where different startups try using the new platforms in new innovative ways. The way you finance that is with venture capital. That is the most effective way of creating wealth for some period between, let's say, 2005 and the COVID pandemic, roughly speaking. Venture capital had existed before, but it really took off and entered its prime for this technological reason that the internet, cloud, and mobile had created so many opportunities to do venture-style businesses that were going to create enormous amounts of wealth really fast.</p><p>My prediction is that as the substructure changes, to go back to Marx, the superstructure will change again. 
Maybe artificial intelligence is showing us a new phase already. You've got the hyperscalers dominating much of the action because AI turns out to be very capital-intensive in terms of compute, the data you need, and the high-priced human talent you need to bring in. Every single supposed startup, whether it's OpenAI partnering with Microsoft or Mistral or Anthropic or DeepMind back in the day being bought by Google, they've all basically formed alliances with behemoths. We have a new tech-based pattern of capital formation happening just in the last 18 months or so.</p><p><strong>[00:26:08] Dan: </strong>Thank you for that. The history of everything is super interesting. I had a question about just how focused these firms would be. It seemed like during COVID, you had firms like Sequoia branching out to different geographies. You had Y Combinator creating growth funds, branching out from their traditional early stage. You had hedge funds coming down and playing in early-stage venture rounds. Then recently both of these things have started to reverse. I think Y Combinator recently got out of the growth equity business, D1 Capital and other hedge funds have been criticized really heavily for their venture returns weighing down the core hedge fund returns.</p><p>I'm curious if you think in this changing world, where's the equilibrium here? Do you think we'll go back to more specialization or will it be required that these firms expand their scope and try new things?</p><p><strong>[00:26:59] Sebastian: </strong>Good question. I think that there are some kinds of diversification which are definitely a recipe for trouble, though not particularly in the hedge fund or venture capital space. Just to address the broader point you're making, let's think about Lehman Brothers for a second, circa 2006 or 2005, when it was brewing the troubles that led to its bankruptcy in 2008. What was going wrong there? 
These guys held enormous amounts of toxic mortgage paper.</p><p>The reason I believe that they did this is that they couldn't decide if they were, A, maybe into the mortgage origination business, B, maybe into the mortgage securitization business. They had a lot of inventory of mortgage loans because they were going to package it up and securitize it and make money that way, or maybe they were doing prop trading and they actually wanted to hold these mortgages because it was a bet that they believed in.</p><p>Maybe there was some other theory, but at this point I'm running out. I think those investment banks were so diversified out of merely asset management into various kinds of transaction businesses and various kinds of relationship businesses that if you can't decide why you own the mortgage paper, you're not going to do a good job of deciding whether or not you should be owning it. I think that was the lesson of 2008, or one of them, as to why the broker-dealers, the investment banks proved to be such a disaster.</p><p>Now, you're asking a slightly more focused question, which is, within asset management-only companies, namely hedge funds, or venture capital, should you specialize? I do think that specialization is better. I'm less dogmatic or less hard on the point than I would be about Lehman Brothers because there are some synergies. We could frame this in the venture space as being the Benchmark versus Sequoia debate. I do write about this in my book <em>The Power Law</em> a bit.</p><p>The reason why Sequoia was determined to get into growth equity was that one day, I think, in 1994, Michael Moritz was sitting with Jerry Yang, the founder of Yahoo, who he had backed, in comes Masayoshi Son, less notorious then than he has become since, but already a sort of cowboy-type figure. Masa says, "I want to invest $100 million in Yahoo." This was at a time when nobody in the valley had ever seen a $100 million check. 
It doesn't exist.</p><p>Jerry Yang said, "I don't need $100 million. I'm going to go public. We've already lined up Goldman Sachs to do the IPO, why do I need $100 million? Thank you, but no, thank you." Masa said, "If you don't take $100 million from me, I'm going to give it to your main rival, and they're going to outspend you on getting your search function or your directory on the home page of other apps. Think about it. Do you want my $100 million? I think you probably do."</p><p>Masa waited while Moritz and Yang went off into some other room and had a little confab and they came out and said, "Okay, we'll take the $100 million," because they believed that Masa was crazy enough to turn around and give it to AltaVista if they didn't. Mike Moritz swallows that, then he observes what happens, which is that Sequoia, which did the Series A investment in Yahoo, discovered Yang, made Yang into a media sensation by putting him on the cover of various business magazines, lined up the IPO, found the chief executive who would lead the company through that, did all of the work. They made less money on Yahoo than Masa did because Masa wrote a much bigger check.</p><p>Although Sequoia had a better multiple, in dollar terms they lost. Michael Moritz is somebody who does not care to lose. He's one of the most competitive people I've ever met. He said, "I'm not having this. I'm going to do my own growth equity operation at Sequoia." Right around the same time as that meeting with Masa, he became the leading partner along with Doug Leone at Sequoia. He was in a position to make company policy. 
What he did was insist that they should go into growth equity.</p><p>Then later when Masa came back with his Vision Fund, circa 2016-2017 or thereabouts, that's when Sequoia did an $8 billion growth fund, much bigger than before, precisely because they wanted to be able to provide follow-on capital to Sequoia's Series A companies without the danger that their own Series A companies would take money from Masa, whom Moritz hated. I'm not making this up. When I say hated, one of the things that Mike did, which he maybe shouldn't have done, was to give me, in my research, some internal memos he'd written in which he compared Masayoshi Son to Kim Jong Un, the dictator of North Korea.</p><p><strong>[00:32:05] Sebastian: </strong>He had fairly strong feelings about him. It was all about not having Sequoia Series A companies hijacked. Now, if you look at the story of Series D, E, F companies that did take Masa's money, like WeWork, or indeed, Uber, it didn't turn out so well because Masa said to these entrepreneurs, "I want you to be bigger, faster, crazier, never mind about being careful." You get a WeWork phenomenon where masses of capital are destroyed. I think Sequoia was right to prefer to keep Masa at arm's length, not just in ego terms but in actual return terms.</p><p>Now, Benchmark is the parallel story. They were a Series A shop just like Sequoia. They believed in generating high multiples on small-sized checks in Series A. They thought about doing growth and they decided not to. They stuck to their guns. Then lo and behold, who was the Series A investor in WeWork which Masa hijacked? It was Benchmark. Who was the Series A investor in Uber which got hijacked, actually not just by Masa but by a whole bunch of people including the Saudi sovereign wealth fund? It was Benchmark.</p><p>Benchmark did these Series A checks. 
They were good investments at the beginning and they became bad investments later, particularly WeWork, because they didn't have the Sequoia strategy of defending themselves with their own growth fund. I think that tells us that there is a case for some diversification. Growth investing is different to early-stage investing. You need a different team, you need to train that team and make sure that they're not just doing a Series A type of mentality. I think it's clear that it's doable. Sequoia got it wrong the first couple of times, but by the third growth fund it was generating very good returns.</p><p>Accel seems to be doing okay, as far as I know, with its growth fund. I think this is a doable thing. It makes sense for Series A VCs to go for it.</p><p><strong>[00:34:08] Dan: </strong>Okay. In broad macro terms, in the grand sweep of history, we should expect asset management to continue to evolve like all of the history of business has. Some firms can do multi-strategy well. I'm curious about a case where a change of strategy went poorly. Specifically with John Doerr when he really focused the firm on climate tech in the mid-2000s. Do you view that as just an unlucky bet? Maybe Moritz made the good bet with growth equity and Doerr just made the bad bet with cleantech? Did he fall into a trap that investors like Moritz have a framework for avoiding?</p><p><strong>[00:34:42] Sebastian: </strong>I think one of the big surprises, when I did the research for my book, was that I hadn't expected the dynamics within the partnership of the venture capital fund to be so important and so interesting. I think what went wrong with John Doerr when he did that climate bet is that Kleiner Perkins had evolved to a point where the other big hitters that had been around in the 1990s, most notably Vinod Khosla, had left and set up their own VC shops.</p><p>John Doerr was left behind at Kleiner Perkins with nobody of his stature who could challenge him. 
When he got it into his head that cleantech was the thing, he went in way too big and with far too few checks and balances and cautions that might have arisen if he'd been doing this 10 years earlier, because there would have been partners who had the standing to say, "Wait, John, let's do some cleantech, but not that much. Just be a bit careful," because there were people at the time who were arguing, "Look, it's going to take a very long time for these cleantech bets to pay off. You're essentially betting in a way on the outcome of the 2008 election."</p><p>I think that's part of what went wrong with cleantech. There was an assumption that whether it was McCain who got elected at one point, or Obama who got elected, they both said they were going to do something about pricing carbon, or at least capping and trading it. Some of the cleantech bets, in that period in '08, went in on this assumption that the politics was moving in the direction of making cleantech a winner. Then what happened is the financial crisis hit, so Obama prioritized financial reform and never got around to climate change reform.</p><p>Part of it was an over-leveraged political bet. Part of it was just going too hard into one sector and not acknowledging that the nature of the technologies meant they were going to take a long time to pay off. As I say, if you look at Vinod Khosla's firm, he did climate bets, but he didn't blow the firm up over it. Whereas Kleiner Perkins really did. They went from being the top venture partnership in the world in 2001 to not even being in the top 10 a decade later.</p><p>I think that tells us, because venture capital is about subjective early-stage judgments, where there are no quantitative metrics that would guide you about what a good investment is, it's very subjective, you're making these bets on two-legged mammals who walk into your office with a dream, you need smart partners around the table on Monday morning to challenge your judgment. 
Particularly in venture capital, more than in other investment disciplines, the partnership glue is super important for smart decision-making. I think that's what explains why John Doerr blew up.</p><p><strong>[00:37:39] Dan: </strong>In venture firms, how much of the success do you think is due to-- On the contrary, John Doerr was also the reason it was successful. How much of a firm's success do you think is due to these outlier stars at the firm, where just having 1 or 2 can carry you for 10 or 20 years, versus an institution that you build where there's a culture and a thesis and a way of working together where you can maybe get by without the industry number 1 or 2 guy?</p><p><strong>[00:38:07] Sebastian: </strong>I'm a very, very strong believer in the team story. If you look at who were the individuals who topped the Midas List in venture, they are frequently, though not always, from a partnership where somebody else from the partnership is pretty high up as well. That was certainly true for John Doerr when he was doing really well in the 1990s, doing the Amazon deal, the Google deal, and so forth. Vinod Khosla, his partner, was actually doing even better than him. In 2001, Khosla was number one and John Doerr was number three in the world. I think that's totally not a coincidence.</p><p>Equally, if you look at Sequoia partners, who are frequently very near the top, there's a whole bunch of Sequoia partners at any given time who are in the top 20 or whatever. I think that's because of the need for balance within the team. Benchmark is a durable, very successful partnership. 
It's famous for having equal economics between the partners, meaning that if there are 5, let's say, 50-year-olds, who have been doing it for 20 years and they bring in a 30-year-old partner who's new to it, they will give that 30-year-old the same returns, the same compensation exactly as the experienced people are getting.</p><p>I told this story to a friend of mine who runs a private equity shop and he practically stopped walking and had to sit down for a bit. He was so shocked. The idea that he as the founding partner of his private equity operation would share equal economics with somebody 20 years younger than him, he just couldn't get his head around it. I explained to him, "Look, it's because when you're doing PE, your counterpart is the CEO of some big company who's probably 50 years old. The fact that you're 50 is an advantage, you're relating to them. In venture, you're often dealing with somebody in their 30s or even late 20s or something who's the founder and being young can be a big advantage, so you should give equal economics-</p><p><strong>[00:40:09] Dan: </strong>Oh, that's a good saying.</p><p><strong>[00:40:09] Sebastian: </strong>-to the new partner who comes in. Again, I think this is why investing partnerships and the shape of the investing industry changes with the ebb and flow of who the founders are. If we went back to a period where most venture upside derived from deep tech companies where you need a PhD at least in material sciences or some deep thing like this, and therefore, by definition, the youngest you could be is 27 and probably you are more like 40, then you would see the average age of venture partners go up. The capital structures move with the technology, which drives the nature of the companies that are being formed.</p><p><strong>[00:40:57] Dan: </strong>Yes. I hadn't heard that insight before. That's fascinating. Question on FTX and some more recent phenomena. 
Sequoia had a glowing bio of Sam Bankman-Fried on their homepage where it talked about what a great founder he is. It's just a one-pager on him. They caught a lot of negative press for it, but I'm actually wondering if you think this negative press was warranted, or in order for a firm to be successful, should we expect that they will be enthusiastic about some people that are right on the borderline of, for lack of a better word, too weird?</p><p><strong>[00:41:29] Sebastian: </strong>I think weird people are fine. That's part of why venture capitalists make the big bucks, they have to manage weird people because weirdness goes with genius quite often. Actually, that's a scientific observation, not just an anecdote. I've read things about how if you look at members of Mensa, the high IQ club, people who test very, very high on IQ tests are more likely to have depression, autism, and other challenges.</p><p>I thought initially this is just because if you're that smart and you're 10, all the other 10-year-olds are really dull. While they're playing sports at break in the schoolyard, you're solving a chess problem at your desk, and therefore you don't socialize. No, it's more than that. The biochemistry of this thing is apparently that the links go deeper than just-- It's beyond just nurture.</p><p>I think if you are looking for power law returns from individuals who are often on one tail of the distribution, you should expect to see some unusual characters and you should be willing to work with them. Your job as the VC is in a way to compensate for their idiosyncrasies, to protect them from making mistakes that might arise from those idiosyncrasies. Clearly, many founders don't have these idiosyncrasies, but some do and one should work with that.</p><p>I think with FTX, the fault that Sequoia had was not that they backed somebody with a fantastically exciting hairdo, who played video games whilst talking to Anna Wintour. 
The mistake was not to ask basic due diligence questions about, so when money is transferred from Alameda, the hedge fund, to FTX, the trading platform, who signs that? Where's the paperwork? Basic things like that, which any investor in any financial operation should ask, they seem to have failed to ask. Why did they fail?</p><p>I think that's partly the pandemic, that they did the due diligence in a Zoom call. I believe that one of the lessons from the pandemic is actually that being forced to get on a flight to the Bahamas, in FTX's case, means that you have four or five hours or whatever it is on the flight to think about what are you going to ask, what are the pitfalls, how will you ask it, how will you suss it out? You see the person face to face, and if they're playing a video game, you probably notice. I don't know.</p><p>I think not doing it over Zoom might have saved Sequoia. Then the other thing clearly going on is that crypto was just insanely hot at the time. Sequoia had stayed out of crypto until the last minute and they wanted badly to get some skin in the game. They thought FTX was, at least in the crypto world, the closest to a blue chip that you could be and so they rushed in too fast for that reason as well.</p><p><strong>[00:44:37] Dan: </strong>Another big story in recent years is Reddit and meme stocks taking down Melvin Capital with the GameStop short squeeze. It's maybe one of the wildest stories actually ever in finance, if you think about it. Do you expect this to be a phenomenon that we'll continue to see going forward and impact how hedge funds or other asset managers allocate their capital, or do you think it was a blip on the radar and a one-time thing?</p><p><strong>[00:44:58] Sebastian: </strong>I think on that one, I incline a bit towards the one-time thing theory. When people in finance do something that is deeply irrational like buy stocks because of some meme, they get burnt and normally they learn a lesson. 
I'm not discounting the importance of Reddit or of Robinhood, but I do think that people on these platforms are intelligent and they learn lessons. The other thing to keep in mind is that I'm not sure the GameStop episode would have happened at another time. If I'm getting my timing correct, this was in the period when everybody had a stimulus check to invest. I think-</p><p><strong>[00:45:38] Dan: </strong>That's true, yes.</p><p><strong>[00:45:39] Sebastian: </strong>-that was driving quite a lot of the froth at the time. People today on Reddit and on Robinhood may not have quite the same cavalier attitude with the money they've got.</p><p><strong>[00:45:53] Dan: </strong>If a talented young relative came to you and said, "I've got my heart set on being an investor," and they're going to do a hedge fund or venture capital, let's say they're graduating today, which industry would you advise them to go into?</p><p><strong>[00:46:06] Sebastian: </strong>I think it does depend on personality. There's a huge personality gap between hedge funds, just to generalize, and venture capital. In hedge funds, you can make your money by looking at data on a screen and figuring out some quantitative relationship that other people haven't spotted, or even a deeper machine learning approach. You can make money by judgments around political decisions, whether that's in merger arbitrage or whether that's in macro trading around currencies. These are things which you can pretty much do from your desk.</p><p>On the other hand, venture capital is a social sport. You have to be out there meeting entrepreneurs, out there interviewing the first five people that the entrepreneur you backed last week might want to hire onto their founding team. You're typically very involved in the early hiring and vetting. You're just in meetings the whole time. You're chasing people down. 
The securities you're trying to invest in are actually people with two legs who will run away if you're not good at relating to people personally.</p><p>I think venture capital is a game for extroverts and hedge funds are often a game for introverts. When Louis Bacon, the macro trader, did very well and bought himself a private island, people joked that it didn't make any difference because he was already so insular. I think I would advise the young relative based on the personality type and skillset that they had.</p><p><strong>[00:47:38] Dan: </strong>What do you think is more irreplaceable, the elite VC firms or the elite hedge fund firms? Assuming Renaissance Technologies was wiped off the face of the Earth, it's very possible their trading strategy just wouldn't exist and that alpha just wouldn't be collected today. If you took Sequoia off the face of the Earth, I'm not sure you could make the argument that the same companies that are getting funded wouldn't find capital elsewhere. I'm curious how you think about that question of which is more impressive and which of the elite firms have more of a core differentiator in the two industries.</p><p><strong>[00:48:10] Sebastian: </strong>I think what you said is correct, in other words, that inventing a new hedge fund insight really takes some feat of intellectual originality because the markets are pretty efficient, as we discussed, and so to do something that gives you an edge means that you've got an insight that other people haven't really come up with or they haven't figured out how to express that insight in terms of trading.</p><p>You have to innovate and so I think that is more impressive, whereas in venture capital, to a first approximation, everybody's doing the same thing. They're interviewing founders, doing due diligence on them, trying to do a 360, calling references, figuring out the shape of the market and the TAM, the total addressable market, that they might be going after. That's the formula. 
Now, there are exceptions in venture capital.</p><p>Part of what I was trying to do in my book by describing the history is that every now and again a venture capital partnership does come up with a new idea and creates a new branch of the discipline. Whether that's Y Combinator having the idea for an incubator with batches twice a year, that was a new idea. Growth equity as practiced in the 2010s was invented by Tiger Global and Yuri Milner from DST, and I described that process as well.</p><p>Other innovations might include the prepared mind approach, which Accel invented in the 1980s, where you deliberately think about, okay, so which new technology platforms are coming down the pike? How do I prepare my mind and think about what business could be built on top of this new platform? What entrepreneur would be likely to have the right credentials and experience to build that business? Therefore, who do I expect to meet in the next six months in terms of the right entrepreneur to back?</p><p>That was just a new way of thinking about venture, less ad hoc, more deliberative, which now has been spread around the valley and I think most people understand it. It was an innovation at the time which gave Accel, I think, an edge, at least until that secret sauce leaked out. It's not that there's zero innovation, but I agree with your core point that hedge funds is all about innovation whereas with venture capital it's less clear-cut.</p><p><strong>[00:50:35] Dan: </strong>I want to shift gears a little bit over to your work on Alan Greenspan and the Federal Reserve. 
For folks listening, the Alan Greenspan biography is one of the best biographies I've ever read, period, so I highly recommend it.</p><p>My first question on this topic: let's say a new Fed chairman emerged, so we're in a hypothetical here, and they understood exactly what needed to be implemented to guide the economy to low unemployment, maximum productivity, et cetera, all the benchmarks that we look for in terms of outcomes. What would they run up against as the biggest bureaucratic blockers, especially if they're trying to implement something that isn't historically typical, so maybe something other than the 2% inflation targeting? What is the bureaucracy within the Fed that they would face if they knew what the solution was?</p><p><strong>[00:51:23] Sebastian: </strong>It's an interesting thought experiment. Of course, nobody knows the solution to these things because the whole nature of central banking is that you are trying to guide an economy which itself is unstable and is changing in shape and all that stuff we were talking about, about technological change and so on. You have experience from the past that might lead you to a view about how to do central banking, but it's a dynamic system, and therefore you have to be dynamically adjusting your model. Certainty is not really a possibility.</p><p>To go with your thought experiment, supposing you could know for sure what to do, what would you face in terms of constraints? You'd have to get the Fed system on your side. There's a Federal Open Market Committee that meets every six weeks or so, and they vote on interest rate-setting decisions, monetary policy decisions. The chairman is just one vote, and so the chair has to pull the other members of the committee along. 
That typically means that you need a bit of time to make your argument, persuade people.</p><p>It's a combination of one-on-one diplomacy with the other Fed governors of the central Federal Reserve Board. Also, the presidents of the regional reserve banks, who rotate with each other, get to vote on interest rate-setting decisions as well. You need to win these people over. In turn, to do that, you probably need to win their staff over because central banking is a very technical priesthood and the staff economists who have been in the trenches building the models and generating advice for the principals, those people are deep experts and they are properly respected, as they should be, for their technical chops. It would be essentially a challenge of persuasion to bring the system behind you.</p><p>It's not good enough in a 12-vote committee to win the vote 7 to 5 because if you did that, then you'd also be having to think about the reaction of the markets. Monetary policy doesn't work in a vacuum. The Fed sets the short-term interest rate and then investors out in the wide world, not only in the US, but globally, look at what you're doing, and then they make trades in the bond market, which determine longer-term interest rates, which affect what you really care about, which is the cost of borrowing for businesses, and therefore the growth rate and so on.</p><p>Even if you could persuade the entire Fed system that you knew what you were up to with some unexpected interest rate decision, if you couldn't persuade the financial markets, and that's a lot of participants, that you were on the right course, they would look at this and say, "No, that doesn't make sense. We don't believe that's going to be sustainable. The Fed's going to have to reverse this decision at the next meeting, and therefore we're going to go in the opposite direction to what the Fed wanted." 
It's actually a highly democratic system in the sense that you have to persuade a lot of actors of what you're doing. That's why Fed communications is a whole specialty in the central banking world in and of itself.</p><p><strong>[00:54:39] Dan: </strong>This gets to my next question, and this one is similar to the earlier question I had for you on how much innovation we should expect in models for asset management. The challenges you listed give me the sense that innovation in the Fed is going to be very challenging, and there would have to be a lot of systemic and public-opinion changes before an authoritarian Fed chairman could come in and just make a change to anything.</p><p>Just broadly speaking, how much innovation do you expect to see in terms of the Fed's strategy for stabilizing the economy in, say, the next 10 or 20 years, or should we expect it to be very slow-moving and to look very similar in the future to how it does today?</p><p><strong>[00:55:16] Sebastian: </strong>It's not static, as I said. It's dynamic because the conditions of the economy change and that forces a need for some change by the Fed. It tends to be evolutionary more than revolutionary because quick changes don't allow for the consensus building that is necessary. I think the great counterexample, which people like to cite, is Paul Volcker being appointed Fed chair in 1979 and then a few months later shocking everybody with a change of inflation regime. Essentially, he invented monetary targeting. Didn't invent it, but he adopted it for the first time at the Fed. This brought about the big Volcker disinflation. It was a radical change of direction.</p><p>That's the most famous, commonly cited example of a turn-on-a-dime shift, a revolution, not an evolution. Even that example, when you look at it a bit more closely, you realize there are some qualifiers. First of all, Volcker didn't actually do the big shocker as soon as he got into office. 
He had already been president of the New York Fed, and therefore effectively the number two or number three in the whole central bank system before he became the chair. He'd been part of the team making this argument for a while even before he became chair.</p><p>Then he became chair and didn't do anything radical at first. It was only after a few months when the US inflation data and also the currency started to go crazy, particularly the dollar was crashing against gold, that there was a sense of panic in the country that something radical was really, really needed. At that point, he had the political consensus in his favor if he went radical. That's what he did. He seized the wind. He went with the wind behind his back, but there had been this period of preparation beforehand, as I say. Even that revolution had more of an evolutionary character than people remember.</p><p><strong>[00:57:19] Dan: </strong>Oh, interesting. Interesting. In what ways do you think the Fed has most evolved since Greenspan's time?</p><p><strong>[00:57:25] Sebastian: </strong>A huge difference is in communication. It used to be that the Fed said almost nothing about its reasoning for its decisions, and sometimes actually didn't even communicate the decision. That's an amazing fact today, but they didn't tell the markets when they had decided to change rates. Indeed, when Volcker, as I said before, started to adopt monetarism in 1979, he gave up in 1982. He didn't tell the markets, he didn't tell the world that he'd given it up for a few months. Then he let it slip in some obscure speech where he said in passing, "Oh, we just reconsidered this thing a bit, and for the moment, we're backing off. We'll see how it goes." It was very, very soft-pedaled.</p><p>Greenspan, when he became chairman in 1987, continued this Volcker tradition of being pretty circumspect. 
When the Fed changed interest rates, it would instruct the trading desk at the New York Fed to buy and sell short-term securities to manipulate the short-term rate. It wouldn't announce that and people in the markets in New York would get calls from the Fed and they'd figure out what the Fed had decided, but there was no announcement.</p><p>Then I think in 1994, the Fed began to announce its decisions, and so there was guidance and communication. Then in the 2000s, Ben Bernanke became a governor, not the chairman yet, but just the governor, and he brought with him a whole university perspective on the value of forward guidance, of communicating more rather than less with the markets.</p><p>If you do that, then, flipping what I said earlier on its head, the markets do understand what your reasoning is, and they do trade bonds in a direction that will change the interest rate up or down on the long-term end of the curve in a way that is consistent with your policy because you've explained to the bond markets why interest rates should be higher, you've moved up your short-term rate, and then the long-term rate will follow because you've made the case on the whole. Bernanke was very big on forward guidance. He began to get his way before he was chair. Then he became the chair of the Fed in 2006, and very much entrenched this. 
Yellen continued that in her term.</p><p>It's actually, funnily enough, I think, been slightly dialed back by Jay Powell who believes, and I agree with him, by the way, that there was some merit in the Greenspanian situation because the problem with forward guidance is if you tell people in advance what you're going to do, then it's only credible if you stick to your view over several meetings, and that makes you slow to react to new information.</p><p>One of the reasons why we had this high inflation in 2022 was that the Fed had seen that inflation was for real by late 2021, but it took about four more months to actually start to raise interest rates because it was felt that a complete turn on a dime would shock the markets too much, would violate the implicit contract in forward guidance, which is that you give people warning. The Fed allowed four more months of inflation to kick in and the Ukraine war started in that period, and then the inflation, which would have happened anyway, just got even worse than it might have been if they had been freed of the self-imposed constraints from forward guidance.</p><p>I'll stop on this, but it's worth recalling that Greenspan, when he was chair, was perfectly willing to move interest rates not only without a warning but in between meetings. He wouldn't wait for six weeks to the next meeting. If he felt like one week after the last meeting interest rates should be up a quarter point, he would convene a conference call and move rates with no warning whatsoever. We're a long way from that today. In general, we communicate more and the markets expect that, but that's a very profound shift from where we used to be.</p><p><strong>[01:01:31] Dan: </strong>What is your assessment of how Powell's done so far as Fed chairman? We're recording this on April 10th, 2024.</p><p><strong>[01:01:38] Sebastian: </strong>I'm pretty sympathetic to Powell. 
I have my criticisms, but my criticisms come from a place which is steeped in my study of Greenspan, and which was actually out of consensus with where the Fed was when he was going into COVID and faced this big challenge of the supply shocks from COVID. Let me unpack that a little bit. I never believed in forward guidance to the extent that Bernanke did for the reasons I've explained. I think it just boxes you in too much. When Powell was chair, I think he'd absorbed that. He had served in the US Treasury in a senior position when Greenspan was the chair. He admired Greenspan.</p><p>I happen to know that he read my book and he absorbed some of what I said there. I think he's just a pragmatic-- He's a lawyer by background, not an economist. He's less steeped in that academic economics, monetary policy, I would call it dogma. He was less of a forward guidance believer than either Yellen or Bernanke, who were both academic economists. He was ready to make a shift, but the consensus on his committee and in the financial markets was so powerfully in favor of the Bernanke-Yellen model that I don't really fault Powell for not moving towards a more agile data-driven approach straight away.</p><p>Equally, I have another critique, which is that I believe that central banks should be willing in some cases to raise interest rates because there's an asset bubble. It's not always appropriate to do that, but when the economy is operating at full employment, there's no deflation that you're worried about, because if there's deflation, you shouldn't be raising interest rates, and you see a big asset bubble, which I think everybody saw in 2021 during the COVID everything bubble, then you should raise interest rates in response to the bubble.</p><p>I'm not seeing inflation in the data like the consumer price index, the PCE index. That's not showing me inflation yet, but I'm seeing huge inflation not in the price of eggs, but in the price of nest eggs, i.e. assets. 
I am raising interest rates to dampen that bubble because I know the bubbles tend to burst and that's very ugly. If they had had that reaction function, which I believe they should, in 2021, they would have raised interest rates sooner. They wouldn't have waited till March 2022. Those are my two criticisms: don't be too dialed in on forward guidance, and be willing to raise interest rates in response to asset bubbles.</p><p>I'd say that, otherwise, what happened to Powell was that COVID was just a super uncertain environment. Try to foresee the next wave of COVID, Omicron, Delta, whatever, those names that used to trip off our tongues and that now we're mostly forgetting. Nobody knew how bad the next wave would be, whether there would be another lockdown, whether that would cause a supply shock from China or from some other thing, how COVID-shy workers would be about going back to work, how long COVID would affect things. Don't tell me anybody could predict that.</p><p>We know they couldn't because look at the way that auto companies just completely dialed back on semiconductor orders thinking that their whole business was toast. Then there was a massive semiconductor shortage and everyone was scrambling for semiconductors. That's just one metric that tells you that the private sector was certainly no better at predicting what was going to happen than the central bank was. As for the classic familiar critique of, "Oh, Powell, you didn't see the COVID inflation coming," nobody saw anything about COVID; it was totally unprecedented in how it would play out. I really don't fault Powell for that.</p><p><strong>[01:05:36] Dan: </strong>Do you ever worry that the Fed will lose its political independence, or do you sense that it already has?</p><p><strong>[01:05:40] Sebastian: </strong>I do worry. I don't think it has already. Biden's team has been very explicit in acknowledging the central importance of Fed independence. 
If the financial markets got the idea that the Fed was moving interest rates around not in order to control inflation, but in order to please the incumbent in the White House, then the financial markets would conclude that inflation will be higher than it would be otherwise, and they would extract a penalty in the form of higher long-term interest rates to compensate themselves for higher inflation. It's really a dumb idea to compromise Fed independence. The Biden team totally gets that.</p><p>Janet Yellen, the Treasury secretary, was the former Fed chair, so she knows this better than anybody and she's not going to be part of an economics team that compromises Fed independence. Now, Donald Trump is a different situation. When he was elected last time, I thought that his wacky insistence on very high growth and his total disregard for the value of institutions would combine to create a very dangerous situation in terms of Fed independence.</p><p>I was pleased and surprised that his Fed chair choice was actually good in the form of Jay Powell. He tried to get one inappropriate person on the Fed committee, but she was blocked. Basically, the appointments have been okay under Trump 1. If he were to be reelected, who knows? Then I would worry again.</p><p><strong>[01:07:10] Dan: </strong>Last question here. As I understand it, you studied history at Oxford. I'm just curious, generally, in terms of the broad sweep of business history, are there any historical periods or areas that you haven't covered in your professional work that you find especially interesting?</p><p><strong>[01:07:24] Sebastian: </strong>When I was at Oxford, I was obsessed with the question of why there'd been no Marxism in Britain. I studied late 19th century, early 20th century European history in a comparative way, so comparing Russia and France and Germany, those experiences with the British one. Later in my life as a writer, I haven't really done anything that goes back beyond about 1950. 
Greenspan was born before then, and I talk about his early life. A little bit of the 1930s and '40s. Essentially, my work is focused on the post-Second World War period. As an undergraduate, I was fascinated by the period before that, 1870 to 1940 or so.</p><p><strong>[01:08:09] Dan: </strong>Did that work influence your published writings at all?</p><p><strong>[01:08:13] Sebastian: </strong>I think it influenced it mostly in the methods I developed of intellectual inquiry. I had to synthesize, organize, make sense of large amounts of unstructured information. I think the distinctive thing about history as a discipline, and this is why I slightly regret that it's, certainly in the US, less of a popular major than it is in Britain, and certainly less than it was when I was an undergraduate, is that it does not insist as the first move on imposing structure on reality.</p><p>Take political science, which tries to turn a bunch of 50 elections in different democracies into data and then turn that data into insight. I'm not against it by any means, but I do think there is something qualitative and valuable in history, which begins the other way around. It studies the details, makes sense of the stories, including the human components in the stories. Then from that, it tries to generalize.</p><p>It doesn't invent the categories before it's done the analysis. Just to stretch the point a little bit here, if you'll indulge me, there's a strain of thinking in artificial intelligence where you want an AI to learn concepts. One approach to that is you train the AI in a very unstructured environment. You don't put your simulated agent in what looks like a modern city with lots of right angles and standardized shapes.</p><p>You put it in a more natural environment where no one tree is the same shape as the other tree and everything is undulating and irregular in shape. That's the feature of the natural world. 
Then you invite the AI to try to navigate that natural world, which is much harder than a right-angled, physically built human world. The notion is that the AI will learn things from the irregularities. It'll have to invent its own set of categories, its own set of concepts. It will have to adapt those concepts as it explores the natural world further. It's a different approach to understanding reality, which I think has value.</p><p>One piece of evidence for my view would be the way that after 2008, the value and the standing of economic history went up a lot. There had been this big push in economics towards essentially turning it all into data and modeling it. The nature of any model is that you say, "Okay, we've got eight recessions and we're going to analyze these eight and try to find the regular things that show up in each recession so that we can predict the next recession," but every recession is different. If you ignore that part of it, you ignore something quite important.</p><p>I think what was realized is that the stuff that really matters for everybody, for investors who are trying to preserve capital, but also for ordinary workers and people just going about their lives, is the huge inflection points in economic history like the 2008 crisis. That's what affects people's lives. If you're just predicting whether next year's growth is going to be 3% or 3.4%, nobody notices that in their everyday life. It doesn't matter.</p><p>If you're looking for the inflection, then you need history because you need to more deeply study turning points. It's going to be individuals that make a difference, by the way, in these studies. I'm going on about this, but let me-</p><p><strong>[01:12:01] Dan: </strong>No, this is great.</p><p><strong>[01:12:02] Sebastian: </strong>-go back to my studies of history and to Marxism as one approach to history. Of course, Marxist history is structural. 
It believes in a deterministic view of how technological change drives economic change, drives business forms, drives political stuff on top.</p><p><strong>[01:12:23] Dan: </strong>Do you consider yourself a Marxist?</p><p><strong>[01:12:25] Sebastian: </strong>I was definitely influenced in my early student life by this thing, and I moved way beyond that. Now I'm a sort of anti-Marxist, but for the following reason. I'm not anti, but I think Marxian analysis, that deterministic structural analysis, is only part of what we need to do. The other part is precisely looking for the stuff that isn't structural because that will turn out to be super important. If you think about the big geopolitical calls that many analysts, myself included, maybe got wrong in the last 20, 30 years, it's usually when an individual is way out of sample in his or her behavior and does something radically unexpected.</p><p>If you analyzed objectively, was it in Putin's interest to invade Ukraine? Clearly, it was not. It was going to isolate him from the West. It was going to mean that NATO would be enlarged. It was going to mean that lots of Russian kids would get killed. It would mean that his entire technology sector would decamp to Dubai. It was a really dumb thing to do. If you analyze Putin in terms of the forces acting upon him and how he would act as a rational person, you would have got the prediction wrong. He did it based on his own view of czarist and Russian imperial history and so forth.</p><p>If you think about Xi Jinping, same thing. He broke with the rational expectations model of what a Chinese leader would do. If you look at Donald Trump, he's far more consequential in terms of the path of American history than Biden or Obama because he's so out of sample and radical in his personal choices about the ideas that he pushes. It's the radical, unpredictable individuals who matter in business entrepreneurship and who matter in geopolitics. 
This is what history can capture, and it's something that political science utterly cannot.</p><p><strong>[01:14:23] Dan: </strong>Do you subscribe to the Thomas Carlyle "great man" theory of history? That's what you're saying here, but I'm curious if you're on board with it.</p><p><strong>[01:14:28] Sebastian: </strong>Yes. I'm subscribing to both. I'm saying that of course you want to understand the structural stuff, the Marxist stuff, the demographics are super important and all that. Of course, I don't want to ignore that kind of thing, but I think if you want to find, again, the inflection points, the unexpected changes, individuals make a difference.</p><p><strong>[01:14:49] Dan: </strong>Great. Sebastian, you've been very, very courteous with your time. Thank you so much for coming on today.</p><p><strong>[01:14:54] Sebastian: </strong>I enjoyed the questions. Thank you, Dan.</p>]]></content:encoded></item><item><title><![CDATA[Nabeel S. Qureshi]]></title><description><![CDATA[Film, Shakespeare, AI, startups, and more]]></description><link>https://www.danschulz.co/p/nabeel-qureshi</link><guid isPermaLink="false">https://www.danschulz.co/p/nabeel-qureshi</guid><dc:creator><![CDATA[Dan]]></dc:creator><pubDate>Tue, 12 Mar 2024 12:49:47 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/142536126/670fa1e22e1dc42a17fe26d07853370c.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>Nabeel is a Visiting Scholar at the Mercatus Center focused on developing an optimistic vision for AI, but as you&#8217;ll see in this conversation, his breadth of interests goes about as wide as you can imagine. We talk about foreign film, interpreting the Iliad, Shakespeare, Wittgenstein, Derek Parfit, SF vs. 
NYC, AI, startups, and a lot more.</p><div id="youtube2-6nZE2moM_LI" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;6nZE2moM_LI&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/6nZE2moM_LI?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><iframe class="spotify-wrap podcast" data-attrs="{&quot;image&quot;:&quot;https://i.scdn.co/image/ab6765630000ba8ad8711038fc0fa88f5879bcf1&quot;,&quot;title&quot;:&quot;Nabeel Qureshi&quot;,&quot;subtitle&quot;:&quot;Dan Schulz&quot;,&quot;description&quot;:&quot;Episode&quot;,&quot;url&quot;:&quot;https://open.spotify.com/episode/5IBad1J6UZ6SiUiXd4d2Xk&quot;,&quot;belowTheFold&quot;:false,&quot;noScroll&quot;:false}" src="https://open.spotify.com/embed/episode/5IBad1J6UZ6SiUiXd4d2Xk" frameborder="0" gesture="media" allowfullscreen="true" allow="encrypted-media" data-component-name="Spotify2ToDOM"></iframe><div class="apple-podcast-container" data-component-name="ApplePodcastToDom"><iframe class="apple-podcast " data-attrs="{&quot;url&quot;:&quot;https://embed.podcasts.apple.com/us/podcast/undertone/id1693303954?i=1000648897632&quot;,&quot;isEpisode&quot;:true,&quot;imageUrl&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/podcast-episode_1000648897632.jpg&quot;,&quot;title&quot;:&quot;Nabeel Qureshi&quot;,&quot;podcastTitle&quot;:&quot;Undertone&quot;,&quot;podcastByline&quot;:&quot;&quot;,&quot;duration&quot;:8641000,&quot;numEpisodes&quot;:&quot;&quot;,&quot;targetUrl&quot;:&quot;https://podcasts.apple.com/us/podcast/nabeel-qureshi/id1693303954?i=1000648897632&amp;uo=4&quot;,&quot;releaseDate&quot;:&quot;2024-03-12T11:08:16Z&quot;}" 
src="https://embed.podcasts.apple.com/us/podcast/undertone/id1693303954?i=1000648897632" frameborder="0" allow="autoplay *; encrypted-media *;" allowfullscreen="true"></iframe></div><h3>Timestamps</h3><p>(0:00:49) Nabeel's favorite directors</p><p>(0:02:59) Underrated regions for film</p><p>(0:04:33) Nabeel's favorite Hollywood movies</p><p>(0:06:20) What makes a movie visually inspiring?</p><p>(0:09:34) Shakespeare on film</p><p>(0:10:31) Miyazaki</p><p>(0:12:03) Paris, Texas</p><p>(0:14:34) Jia Zhangke</p><p>(0:16:12) When is the movie better than the book?</p><p>(0:18:06) CGI</p><p>(0:18:44) Robert Bresson</p><p>(0:20:30) Love in film</p><p>(0:21:50) Directors Nabeel hasn't been able to "get"?</p><p>(0:22:34) Film theory</p><p>(0:24:15) Do the arts matter for founders?</p><p>(0:26:08) Going deep on great books</p><p>(0:29:31) The Iliad</p><p>(0:32:24) Tolstoy and Shakespeare</p><p>(0:34:21) Henry IV</p><p>(0:37:51) Shakespeare&#8217;s Sonnets</p><p>(0:38:56) Secondary literature</p><p>(0:41:18) Rene Girard</p><p>(0:42:57) Norman Rush's "Mating"</p><p>(0:44:27) Watership Down</p><p>(0:46:13) Harold Bloom</p><p>(0:48:04) Sci-fi</p><p>(0:49:32) Authors Nabeel hasn't been able to get</p><p>(0:50:35) Tech pessimists</p><p>(0:54:40) LLMs and the big questions</p><p>(0:57:59) Peter Hacker</p><p>(0:59:58) Wittgenstein</p><p>(1:01:47) Derek Parfit</p><p>(1:03:55) Moral intuition</p><p>(1:05:13) Do EAs make good CEOs or founders?</p><p>(1:07:20) Selfishness</p><p>(1:09:43) Favorite albums</p><p>(1:10:26) Beethoven</p><p>(1:11:27) Modernized opera</p><p>(1:12:38) NYC</p><p>(1:13:38) Chess</p><p>(1:14:31) California</p><p>(1:17:57) Fashion</p><p>(1:18:57) Eating in NYC</p><p>(1:19:52) Travel</p><p>(1:21:40) High school</p><p>(1:22:30) Twitter</p><p>(1:31:05) SF and AI</p><p>(1:32:38) AI doomers</p><p>(1:34:28) AGI timelines</p><p>(1:36:07) Nabeel's LLM usage</p><p>(1:43:42) Science brain vs founder brain</p><p>(1:47:48) Iteration vs conviction</p><p>(1:51:02) Is art 
education or entertainment?</p><p>(1:54:53) Sabbaticals</p><p>(1:58:53) What makes Nabeel imbalanced as a personality?</p><p>(2:00:05) Cold emails</p><p>(2:01:43) Meditating</p><p>(2:04:09) Regret</p><p>(2:05:16) Reminding yourself you will die</p><p>(2:08:04) GoCardless and startup culture</p><p>(2:13:37) Ideas vs execution</p><p>(2:16:10) Peter Thiel and Alex Karp?</p><p>(2:18:38) Philosophers in tech</p><p>(2:21:19) Learnings from Palantir</p><p>(2:23:18) Conclusion</p><h3>Links</h3><ul><li><p><a href="https://twitter.com/nabeelqu">Follow Nabeel on X&#8288;</a></p></li><li><p><a href="https://nabeelqu.co/">&#8288;Nabeel's personal site&#8288;</a></p></li><li><p><a href="https://twitter.com/dnschlz">&#8288;&#8288;&#8288;&#8288;Follow Dan on X&#8288;&#8288;&#8288;&#8288;</a></p></li><li><p>Subscribe to Undertone on&nbsp;<a href="https://www.youtube.com/channel/UCtRjQ8EA3Via7KXzv9F173Q">&#8288;&#8288;&#8288;&#8288;&#8288;YouTube&#8288;&#8288;&#8288;&#8288;&#8288;</a>,&nbsp;<a href="https://podcasts.apple.com/us/podcast/the-world-of-yesterday/id1693303954">&#8288;&#8288;&#8288;&#8288;&#8288;Apple&#8288;&#8288;&#8288;&#8288;&#8288;</a>,&nbsp;<a href="https://open.spotify.com/show/59YkrYwjAgiKAVMNGWPaLE">&#8288;&#8288;&#8288;&#8288;&#8288;Spotify&#8288;&#8288;&#8288;&#8288;&#8288;</a>, or&nbsp;<a href="https://www.danschulz.co/">&#8288;&#8288;&#8288;&#8288;&#8288;Substack&#8288;&#8288;&#8288;&#8288;&#8288;</a></p></li><li><p>Watch or listen to previous episodes of Undertone with <a href="https://www.danschulz.co/p/tyler-cowen">&#8288;&#8288;&#8288;Tyler Cowen&#8288;&#8288;&#8288;</a>, <a href="https://www.danschulz.co/p/vitalik-buterin">&#8288;&#8288;&#8288;Vitalik Buterin&#8288;&#8288;&#8288;</a>, <a href="https://www.danschulz.co/p/scott-sumner">&#8288;&#8288;&#8288;Scott Sumner&#8288;&#8288;&#8288;</a>, <a href="https://www.danschulz.co/p/samo-burja">&#8288;&#8288;&#8288;Samo Burja&#8288;&#8288;&#8288;</a>, <a 
href="https://www.danschulz.co/p/3-steve-hsu">&#8288;&#8288;&#8288;Steve Hsu&#8288;&#8288;&#8288;</a>, and <a href="https://www.danschulz.co/">&#8288;&#8288;&#8288;more&#8288;&#8288;&#8288;</a></p></li><li><p>I love hearing from listeners. Email me any time at dan@danschulz.co</p></li><li><p>Share anonymous feedback on the podcast: <a href="https://forms.gle/w7LXMCXhdJ5Q2eB9A">&#8288;&#8288;&#8288;&#8288;https://forms.gle/w7LXMCXhdJ5Q2eB9A&#8288;</a></p></li></ul><h3>Transcript</h3><p>Note: The transcript was created before I added the intro, so timestamps will be off by a few seconds.</p><p><strong>[00:00:00] Dan: </strong>We're hoping to cover a lot today, Nabeel, so thank you.</p><p><strong>[00:00:34] Nabeel Qureshi: </strong>Thanks for having me.</p><p><strong>[00:00:36] Dan: </strong>Who are some of your favorite directors and what makes them great?</p><p><strong>[00:00:41] Nabeel: </strong>Yes, I'm really into film. I guess I would divide this into the obvious directors and then maybe some directors that I think are quite underrated. With what I would call the obvious directors, I think people like Ingmar Bergman, Yasujir&#333; Ozu. I really like film from Asia, so Abbas Kiarostami, he's an Iranian director, Edward Yang from Taiwan. Yes, I think all of them go very deep.</p><p>I tend to like slower cinema for reasons we can talk about. I think slower cinema, it forces you to engage your attention a lot deeper, it forces you to switch into a deeper mode of thinking, so a lot of the directors I like tend to be slower. Some directors I think are underrated. I really like the British director Mike Leigh. He has this very unique way of making his movies where he doesn't start with a script, which is very unusual. 
He starts with a set of characters and a scenario, and he gets his actors to improvise what's happening, the events, and then they write the script as they go.</p><p>What that results in is this very naturalistic, organic form of filmmaking, which I think is brilliant. Another one is John Cassavetes, who's an American director. He was active in the '60s and '70s. He's really similar, actually. His movies are really unusual. They're very jarring to watch because they don't use the classic Hollywood film language, if you will. He'll have a shot, and it's like three quarters of somebody's face or something, and they're talking off camera, and you're like, "What is this?"</p><p>I think one of the functions of art for me is this idea of defamiliarization, so taking what you normally think of as experience and then showing you why it's weird or strange in some way. I think both of these directors, and the directors I like generally, just force you to experience things in a very weird, jarring, unfamiliar way, and then that allows you to break through to a high level of understanding.</p><p><strong>[00:02:49] Dan: </strong>Okay. You watch a lot of directors from all over the world. Are there any particular regions that you think are underrated for directors?</p><p><strong>[00:02:56] Nabeel: </strong>Yes. There's areas of the world that have had waves of great directors, right? People are familiar with Hong Kong. The '90s was a golden era. You had John Woo, Wong Kar-wai, all these directors. Taiwan had its famous new wave. I think there's a lot of those. Iran is a really good place for movies. Obviously France. I think Southeast Asia is one that I'm really excited by that maybe gets a little less attention, although increasingly it does get more attention now.</p><p>There's a director I really, really like. I think he's probably one of the best living directors, called Apichatpong Weerasethakul, and he is from Thailand.
He makes, again, these very slow, very mystical movies about characters, largely in Thailand, although his most recent movie was actually set in Colombia. His movies I think of as taking Buddhist ideas very seriously and very literally.</p><p>What if reincarnation was actually true? He likes to blur the boundaries between reality, vision, dream, memory, all of these things in a very poetic way. I think a lot of these more unusual ideas are starting to come out of Southeast Asia. There's another new movie I haven't seen yet, but I think it's like <em>Dreams in a Yellow Cocoon</em> or something like that that I'm excited to watch. It's also from Southeast Asia.</p><p><strong>[00:04:23] Dan: </strong>Oh, interesting, interesting. Okay, so what do you think about Hollywood movies? You mentioned you like slow. Hollywood is probably known for the opposite, very fast movies. Are there any particular Hollywood movies that you think are great?</p><p><strong>[00:04:36] Nabeel: </strong>Yes, I think Hollywood has its ups and downs. I think most people would agree that the last few years haven't been amazing for Hollywood, mainly because it feels like whenever I go to a movie theater, at least as of a year ago, it was a lot of Marvel movies and sequels and things like that. With that being said, yes, I think there've been a lot of good mainstream movies lately. I enjoyed <em>May December</em> pretty recently. I really liked <em>The Fabelmans</em>, which was made by Steven Spielberg. I think that's a super underrated movie.</p><p>It's really about the curse of being a genius or being talented in some way. It's about this big Jewish family with lots of spiky personalities. I think it's brilliant. Yes, I think going back, a lot of the classics people talk about are actually really good. I love <em>The Matrix</em>. I love <em>The Godfather</em>, parts one and two. I think the classic Hollywood movies are good. I do wish they would make more movies that were for thoughtful people. 
It did feel like you saw more of that in the '90s than you do today.</p><p><strong>[00:05:45] Dan: </strong>Yes. Yes. The '90s had a lot of Matrix-style movies, <em>Being John Malkovich</em>, <em>Eternal Sunshine of the Spotless Mind</em>. It felt like they really hit on that idea a lot.</p><p><strong>[00:05:55] Nabeel: </strong>Right. Yes. <em>Eternal Sunshine</em> is a good example. It's something people still talk about today. It was for a mainstream audience. It had Jim Carrey and Kate Winslet in it. It's beautiful, and it's intriguing, and it plays with all these deep questions, I think, right?</p><p><strong>[00:06:10] Dan: </strong>Yes. In my conversation with Scott Sumner, we talked a bit about movies. One thing that he noted particularly that makes him like movies better than TV is that TV focuses mostly on narrative, and movies put a lot more attention on the visual aspect. What do you think makes a movie visually inspiring?</p><p><strong>[00:06:28] Nabeel: </strong>Yes. I think the important thing about movies is that they take place in time in this very contiguous way. Your attention is forced onto the screen the whole time, and you're experiencing time at the rhythm that the director wants you to. Like I mentioned, a lot of the directors that I like to watch, they have these very long, slow takes because I think they're trying to get you into this rhythm where you're paying attention to ordinary life in a new way.</p><p>I think of the visually striking thing as a piece of that. I think in everyday life, you're like, "Oh, that's a tree. That's a desk. That's a chair." They don't look strange. They look normal. I think the most talented directors, they will take very ordinary objects and make them look strange. Maybe a good example is if you watch Tarkovsky's movie, <em>Stalker,</em> it ends with this unbelievable sequence.
I don't think it's like a super plot-driven movie, so apologies for any spoilers, but I will spoil the ending.</p><p>At the end, he just cuts to this shot of this little girl, and she's sitting at a table much like this one, and it has a glass on it. A train rumbles by outside. A Beethoven symphony starts playing, and then the girl stares at the glass, and the glass starts moving along the table very slowly. This just goes on for a while. It's implied that the girl has psychic powers, maybe because it's near Chernobyl or whatever it is. I felt like that's a really nice example where it's a girl, a table and a chair, but all of your attention is trained on these ordinary objects.</p><p><strong>[00:08:21] Dan: </strong>Yes. Yes. In movies like this, one thing that Tyler says a lot is that he thinks <em>2001</em> has to be seen on the big screen. Do you agree that these types of slower, more visually inspiring movies are better seen on the big screen, or are home theaters good enough that you can get a setup that works just as well?</p><p><strong>[00:08:38] Nabeel: </strong>I think home theaters are definitely good enough. The big screen always helps. I do try and go to the theater myself a lot just because I think that watching it concurrently with a lot of people is its own unique experience. It is a little bit like participating in a conscious dream. It's important to the experience. I think that there is some barrier to stopping it, whereas I think to take the other extreme, if you're watching it on your phone on a train or something, you keep getting interrupted. You don't really get in the flow.</p><p>I think the important thing about movies is they are like waking dreams. To the degree that the environment can simulate that, it's great. I think if you have a really good home theater setup, you have a good sound system, you have a big screen, good for you.
I think that's a really good way to do it as well.</p><p><strong>[00:09:24] Dan: </strong>Are there any great Shakespeare film adaptations? Any favorite ones?</p><p><strong>[00:09:29] Nabeel: </strong>Yes, I really like the Baz Luhrmann <em>Romeo &amp; Juliet</em>. I think it's phenomenal. It does a very good job of reimagining it without doing violence to the original play. I especially like the way he rendered Mercutio. I thought that was really cool. That's one. I haven't seen too many. I do like Kenneth Branagh in general. He's a pretty famous English Shakespearean actor who's done a lot of movie adaptations.</p><p>I remember in school they showed us his adaptation of <em>Much Ado About Nothing</em>. I remember really enjoying that. I don't know, Shakespeare on film, I don't find that it has quite the same power for me as Shakespeare in the theater still. The most visceral experiences I've had with Shakespeare have been live in the theater, and it's bittersweet because they're impossible to recreate.</p><p><strong>[00:10:22] Dan: </strong>Another director you've cited before that you love is Miyazaki. What do you think he understands that maybe Disney and other animation studios are overlooking and don't quite get?</p><p><strong>[00:10:31] Nabeel: </strong>Oh, yes. This is one of my favorite topics. I think Miyazaki just makes movies for adults that are also for children. He really takes children seriously as full beings, if you will. That's very important. If you watch interviews with him, he's always saying, I think kids have a very good sense of the issues that we think of as adult issues. Life and death is a simple example. Even a movie that's relatively on the child-like side of his canon, like <em>My Neighbor Totoro</em>, it's actually a pretty serious plot because the mother is on the verge of death, and she's sick the whole time. It's showing how these two children cope with that.</p><p>Another example is <em>Kiki's Delivery Service</em>.
It's charming, right? It's this teenage girl, she's going to become a witch, and she's going to learn to fly. I feel like Disney would take this in a very whimsical, childlike direction. Actually, it's a drag, she moves to this Stockholm-like city. She has to get a job and work. It's a grind. She gets sick. Nobody cares about her. There's all these things that happen that you wouldn't really expect to happen in a kid's movie. Yes, I think his secret is he takes children very, very seriously, which I think most adults do not by default. He makes movies for children as though they were fully conscious beings.</p><p><strong>[00:11:53] Dan: </strong>Another movie that you cited that, I think, was on the top of your list from last year is <em>Paris, Texas</em>. What do you think this movie says about America?</p><p><strong>[00:12:03] Nabeel: </strong>I think it's more about how Wim Wenders looks at America.</p><p><strong>[00:12:08] Dan: </strong>He's German, right?</p><p><strong>[00:12:08] Nabeel: </strong>He's German. Exactly. Thank you. For context, it's set in Texas. Part of the movie is that there's this place in Texas called Paris, and the characters often talk about it in this yearning way. The whole movie's shot in this very beautiful, hyper-saturated way. There's a lot of these shots of classic Americana. Diners, Coca-Cola vending machines, the Texan landscape, and it's done with this very loving eye.</p><p>It resonated with me. I immigrated here in 2017. For any immigrant to America, the classic images of America are very compelling. The first time you come to New York, you're like, "Why is this so familiar? I've seen these yellow taxis. I've seen these green street signs." He plays with that a lot. You never quite lose that. If you're an immigrant, and you like it here, you always appreciate the Americana.
I just thought it was a very beautiful take on it by a foreign director.</p><p><strong>[00:13:11] Dan: </strong>Do you have any thoughts or analysis on the famous last scene?</p><p><strong>[00:13:15] Nabeel: </strong>Yes, you mean when he's--</p><p><strong>[00:13:16] Dan: </strong>Yes, when he's in the phone booth.</p><p><strong>[00:13:18] Nabeel: </strong>He's driving away?</p><p><strong>[00:13:20] Dan: </strong>I'm talking about when he revisits his wife.</p><p><strong>[00:13:24] Nabeel: </strong>Right. Oh, oh, oh.</p><p><strong>[00:13:26] Dan: </strong>He's sitting in the one-way mirror booth. Yes.</p><p><strong>[00:13:29] Nabeel: </strong>The climactic scene.</p><p><strong>[00:13:31] Dan: </strong>Yes.</p><p><strong>[00:13:31] Nabeel: </strong>Because then he reunites her and the kid, and then he [crosstalk]</p><p><strong>[00:13:37] Dan: </strong>Personally, the reason I asked is that with the one-way mirror scene, I was just stunned through the whole thing. I thought it was [crosstalk] pieces.</p><p><strong>[00:13:44] Nabeel: </strong>Yes, me too. I was completely blown away by that. I was riveted. I think one thing that's underrated about that movie is the scriptwriter. I might be getting his name wrong, but I think it's Sam Shepard. He's a famous playwright. I think the way that that monologue is scripted is brilliant. Yes, the way he plays visually with the fact that it's a one-way mirror, and then the way that faces converge. To your earlier question around the visuals, it's this visual way of conveying that they're forever tied together in this spiritual way. I was stunned by that. I'm glad you liked it, too.</p><p><strong>[00:14:25] Dan: </strong>In Jia Zhangke's films, how important do you think the Chinese context is?</p><p><strong>[00:14:31] Nabeel: </strong>Oh, in Jia Zhangke, yes, I think it's very important. He makes Chinese movies for a Chinese audience. It is also for a Western audience.
Sorry, I guess I should revise this answer. I don't think knowing the Chinese context a ton going in is that important to appreciating the movies. I think you learn about his view of the Chinese context by watching those movies.</p><p>For example, actually, last night, I watched a movie called <em>The World</em>, which is about a park that was constructed in Beijing. It's meant to be a park that has all these world landmarks in it. They have the Eiffel Tower. They have the Arc de Triomphe. They have Big Ben from London. They have a replica of Manhattan there. They're all at one-third scale. A lot of the characters who work there, he shows their lives in a lot of detail.</p><p>This was made in 2004, very much early hyper-modernization China. Yes, the characters have these moments, they're always joking about visiting Paris, and there's no chance that they're ever going to go to Paris because these are migrants who moved to Beijing to get jobs. There's a beautiful scene where the plane flies overhead, and these two characters are just watching it. They're like, "Do you know anyone who's ever been on a plane?" The other one's like, "No."</p><p><strong>[00:15:54] Dan: </strong>Oh, interesting.</p><p><strong>[00:15:54] Nabeel: </strong>I think you get all of this from watching the movie, but it does help to know a little bit about China.</p><p><strong>[00:16:01] Dan: </strong>Yes. What does it take for the movie to be better than the book?</p><p><strong>[00:16:06] Nabeel: </strong>Interesting question. I guess some examples I can think of where I felt that the movie was at least as good as the book. <em>Lord of the Rings</em> has to be mentioned. I love the books. They're very important and profound, I think. Those movies are absolutely insane. I'm trying to think what else. I don't know. Maybe <em>The Godfather</em>, actually?</p><p><strong>[00:16:28] Dan: </strong>Oh, for sure. Yes.
What do you think it was about the <em>Lord of the Rings</em> that made it-- Was it just the fact that it's a great movie, and a great movie is better than a great book? Is there something particular about that story that fits better in movie format, do you think?</p><p><strong>[00:16:44] Nabeel: </strong>Yes, I think the best movie interpretations of books really make the book their own but somehow do feel very true to it as well. We all have this sense when we watch a movie adaptation, if the character's not quite right, or if the casting is wrong. I remember with <em>The Lord of the Rings</em> version, Elijah Wood was Frodo. Aragorn just looked exactly like I imagined Aragorn to look.</p><p>I think they nailed the casting and nailed those elements of the book. I don't think they fundamentally altered anything in the book that's super important. They really took advantage of the special properties of film. The scale, the way that they did away with CGI and constructed everything live is really, really impressive. Yes, I think essentially, an adaptation has to be its own work of art and work on its own level. Sometimes it does it better than the book. I think <em>The Godfather</em> is actually a really good example where I feel like the original is almost a poppy thriller. Coppola takes his own experience of immigrating to America as an Italian and turns it into something more.</p><p><strong>[00:17:57] Dan: </strong>Yes. Speaking of CGI, what is your view on this? Should all movies be constructed like <em>The Lord of the Rings</em> and just forgo CGI in favor of really expensive and complicated theater setups? What makes for a better movie?</p><p><strong>[00:18:12] Nabeel: </strong>I feel like I've seen fine examples of CGI, so I don't have a strong opinion here. I think any good artist or director is going to take these things and turn them to a good purpose. It is a bit of a turnoff when a movie has too much CGI, and it's super obvious.
For example, I enjoyed <em>Avatar,</em> and that's almost entirely CGI, isn't it?</p><p><strong>[00:18:34] Dan: </strong>Yes. Did Robert Bresson exceed Tolstoy in <em>L'Argent?</em></p><p><strong>[00:18:39] Nabeel: </strong>I don't think so. <em>L'Argent</em> is inspired by Tolstoy's short story. I think it's, I want to say, <em>The Kreutzer Sonata</em>. Do you remember which one it is?</p><p><strong>[00:18:51] Dan: </strong>Yes, I believe that's it.</p><p><strong>[00:18:53] Nabeel: </strong>Yes, and I think not, basically. <em>L'Argent</em> is about this cold-blooded, ruthless killer. I feel like he's getting at the same territory that Dostoyevsky or the late 19th-century Russian existentialist-adjacent novelists got to. It's still a really good achievement, but the Russians can explore, I think, more of the territory of things like Christianity and the transcendent. They dive deep into the intersection of the low elements of life with religion and Jesus and how he spoke about the poor and things like that.</p><p>I think books are actually really good for that because you can bring in text, and you can play with these intertextual references. I think in the movie, the killer is just so cold-blooded, and you don't really get a sense of what's behind him or what's driving him. Bresson does hint at these transcendent elements in the way he photographs things and in the cinematography, but I feel like he can't explore it in a way that's intellectually satisfying. Maybe that's an example where it would have worked better as a novel.</p><p><strong>[00:20:06] Dan: </strong>Oh, interesting. Yes. These sorts of deeply philosophical, inner-life questions, like why is this serial killer doing what he's doing, you think lend themselves better to some of the Russian novelists then.</p><p><strong>[00:20:16] Nabeel: </strong>I think so. Yes, I think so. Yes.</p><p><strong>[00:20:18] Dan: </strong>Okay. Interesting.
What film or films do you think showed the best insight into love and relationships?</p><p><strong>[00:20:26] Nabeel: </strong>To love and relationships. I don't think I've found anything that's too satisfactory on this front, which is a weird answer. I think there's a general problem in art, which is it is very difficult to convey happiness and stability.</p><p><strong>[00:20:46] Dan: </strong>Oh, interesting. Okay.</p><p><strong>[00:20:47] Nabeel: </strong>Everything is about people breaking up, having arguments, cheating on each other, et cetera. There's lots of interesting explorations on that. <em>Anna Karenina</em> is a really good book about this, for example. I think the question I'm really interested in is what makes a good relationship, what makes a healthy relationship and what are some vivid examples of people having that.</p><p>For some reason that doesn't lend itself to good art. The philosopher Agnes Callard has this essay where she says art is for seeing evil, and she thinks the purpose of art is just to give you visceral experience of things going wrong. Maybe to develop your moral intuition. Yes, for me, I think film has endless rich examples of how things can go wrong. The characters are a little bit crazy or a little bit unstable, and it's entertaining. It's fun. It's deep to reflect on. For you in your life, you want to know what makes relationships go well.</p><p><strong>[00:21:41] Dan: </strong>Yes. Are there any directors that you feel like you've never been able to really get or grok, or you feel like maybe people you respect or think that they're really great, but it doesn't do it for you?</p><p><strong>[00:21:53] Nabeel: </strong>I haven't gotten as much into American filmmaking in the '50s to '70s, I would say as I would like. Things like the classic movies with Cary Grant, Westerns, John Ford, things like that. They're fun, but I'm not obsessed with them, and they feel a little dated to me maybe. 
I think I need to go back and revisit, but those have never resonated that much.</p><p><strong>[00:22:24] Dan: </strong>Okay. You recently did a post where you said, "Hey, here's a bunch of books you can go read if you want to get really into film." I'm curious, how much does one actually need to understand film theory to really enjoy this? If I want to go enjoy Tarkovsky and really be blown away by that last scene with the girl, the Beethoven symphony, the psychic powers, do I need the film theory, or does it help enhance the experience, or can I just go watch it, enjoy it?</p><p><strong>[00:22:47] Nabeel: </strong>You can definitely go watch it and enjoy it. I'm very skeptical of film theory. I've read these more abstract takes on film and all of that, and I'm generally skeptical of them as being an aid to understanding. I think a lot of film-theoretic academic writing is like you take an academic who maybe has a slightly Marxist slant, and they write some stuff about how <em>L&#8217;Argent</em> expresses the idea of Marxism or whatever. It doesn't really help you understand it. They're just trying to publish a paper.</p><p>I think most of the books on that list that I posted are not film theory. They're more like interviews with practitioners or directors. The first one was <em>Cassavetes on Cassavetes</em>, which is just a series of long interviews with John Cassavetes about how and why he makes movies. It's just him ranting about how terrible most movies are and what's important in art and things like that and how the ancient Greeks influenced him.</p><p>I think that's really interesting. I think more generally, good criticism does deepen your appreciation of stuff. You've probably had this experience where you've watched a movie, you felt there was something great there. It moved you, but you couldn't really explain why.
I think a really good critic, you can read, and they will tell you maybe a bit more about why you felt that way.</p><p><strong>[00:24:05] Dan: </strong>Do you think a deep understanding of the arts or everything we've just been talking about, someone who's really into film but also maybe literature, music, et cetera, would this make people better founders or leaders, or do you view this as just like another side hobby?</p><p><strong>[00:24:16] Nabeel: </strong>[chuckles] I think there's no correlation basically. I think there's a bunch of effects that work in opposite ways, and these net out such that it's not an important predictor. What I mean by that is, you could say, "Well, if someone's really into literature and film and the arts and so on, then they're not going to be a good founder because the best founders are monomaniacs, they're obsessed with one thing, they don't care about anything else. If there's somebody who's going to the theater all the time, how much are they really dedicated to building their company?"</p><p>That's valid. I think a lot of the best people I've seen at a given thing are very narrowly into that thing and not into anything else. On the other hand, somebody who's good at those things is often good at cracking cultural codes, which is a Tylerism, and that is a skill that can be very helpful in certain contexts. I think this is something like Tyler Cowen's investment thesis with Emergent Ventures, and I think he does fund a lot of these people. For example, I think if you're doing something like enterprise sales, it can actually be really handy to be able to speak the customer's language, especially if it's in a domain area that you're not super familiar with.</p><p>Various times in my career, I've had to go into a new industry that I didn't know so well. One example is I spent a year working in an airplane factory at Airbus, and I don't have an aeronautical engineering background.
I think if you're the kind of person who's good at cracking these codes very quickly, you can learn to speak the customer's language very fast, and that helps you be successful in that domain. These two effects work in opposite ways. I don't think it nets out to anything super significant, but I'm always interested in that question.</p><p><strong>[00:25:58] Dan: </strong>Yes. You've mentioned before that you went from being the type of person who says, "Hey, I'm going to go read 52 books a year," tallying them up, to saying, "Hey, I'm going to read 1 or 2 books, spend as much time as I need with it. Maybe read a lot of secondary literature on it." What are some books that you feel like you've been rewarded for doing a super careful reading on?</p><p><strong>[00:26:18] Nabeel: </strong>Yes. It's ironic with that one because I've actually gone back the other way, and I'm going for over 100 books this year. I think there's a periodicity element here. I think there's times in your life where you just have to take four to six weeks and just hunker down with the <em>Divine Comedy</em> and go through it slowly and read a bunch of secondary literature. Then there's times when you need to read four novels and four nonfiction books a week and just see what's out there. I think I just cycle through phases. A few books in my life I've done this with: the <em>Iliad</em> was one. I did that last year.</p><p>When I was a teenager, I did this with <em>Ulysses</em> actually. I read all the chapters, and then there's a couple of really good reader's guides to <em>Ulysses</em> that people have written where they explain all the references. You need them because <em>Ulysses</em> is very, very willfully obscure, and Joyce doesn't explain anything. Unless you know a lot about 19th- and 20th-century Dublin, it's going to be very hard to understand. Yes, the <em>Iliad</em>, <em>Ulysses</em>, I think authors like Dante, Shakespeare, they all reward this.
I think one of the more important things in life, if you're into things like literature, is going really, really deep into a few of the great works and getting that super deep understanding of them.</p><p><strong>[00:27:32] Dan: </strong>How do you do this in practice? How do you know when you've gone deep enough?</p><p><strong>[00:27:36] Nabeel: </strong>Yes, I think the ideal way to do it is to read it once and read it fairly quickly if you can and then take a while to write out your thoughts and stuff that stuck with you. Look up things along the way if you want, but don't let it slow you down too much. A mistake I see people making is thinking they need to understand every single reference. They're reading a Shakespeare play. Every line has five annotations. They're like, "Oh, I need to look up what bodkin means," or whatever. It doesn't really matter.</p><p>Just read it fast and then take the list of things you wrote down. Then with that as a lens, go through some of the secondary literature that you find. As for finding that secondary literature, I think you build up your expertise on this over time, but it's like, take some authors that you trust or take a book that you trust and then look up the bibliography and what books they reference a lot and follow that chain recursively.</p><p>Go dive into the secondary literature, see some of the things that the critics are talking about, assess whether you agree or disagree with them. Then ideally you write your own piece or essay on the book or whatever it is. Then I think you go back and read it a second time with all of that knowledge. That is the richest reading. The best thing about these great works is that you get increasing returns. Every time you read them, you gain more from that. It's a lifelong thing. Every time is really fun.</p><p><strong>[00:28:58] Dan: </strong>Yes. One thing I always think about with these is like, it's like a positive feedback loop for making them great.
Because if a great book has a bunch of secondary literature, it's going to be more interesting to go really deep into, and then it's going to get more secondary literature. You can just keep going.</p><p><strong>[00:29:12] Nabeel: </strong>Yes. Although I guess this process can go too far sometimes. Sometimes there's just too much secondary literature on something.</p><p><strong>[00:29:19] Dan: </strong>Yes, or garbage secondary literature.</p><p><strong>[00:29:20] Nabeel: </strong>Exactly, yes.</p><p><strong>[00:29:22] Dan: </strong>You did this with the <em>Iliad</em>. How has your interpretation of that book changed over time?</p><p><strong>[00:29:27] Nabeel: </strong>Yes, I think it's less about having an interpretation and more about deepening your appreciation of certain parts. For example, I think knowing how the structure of that era of Greek society worked can be helpful in understanding. There's a really poignant scene where Hector says goodbye to his wife, Andromache, and he says goodbye to his baby before going into battle.</p><p>I think just reading a little bit about how society was structured and what was expected of men at the time, there's these famous sayings about how men's mothers would say, "Go to battle and either win or come back dead on your shield." The role of honor and things like that, understanding how that works is actually really helpful context. Then there's other sequences that I think first-time readers find very mystifying about the <em>Iliad.</em></p><p>Two examples: there's this long sequence where they make a shield for Achilles, and Homer spends pages and pages describing this. That's one of them. Then the second one is the catalog of the ships, which I think is pretty early on. I think it's the second or third book of the <em>Iliad</em>, and Homer just spends the whole time being like, "And then this ship came, and then this family was on it."
It's pages and pages, and you're like, "What is going on here?"</p><p>I think when you read the secondary literature, you gain more appreciation for why these scenes exist and what makes them special. I think it's a mistake to have too much of a take or an interpretation of a book. I think it's helpful to have multiple interpretations and then hold them in your head at the same time. One other thing I want to mention on this is Simone Weil, the French philosopher. She wrote a really brilliant essay called <em>The Iliad, or the Poem of Force</em>, which I think everyone should read, even if they haven't read the <em>Iliad.</em></p><p>It's just such a genius example of criticism and how criticism can deepen your appreciation of a book. Her take is that the <em>Iliad</em> is all about the tragic effects of force. She refers to force as just compulsion. It's this very rich word in and of itself, but for her, force is a sword driving through somebody's skull. It's this impersonal thing that doesn't care who you are, what your past is, who you loved. It just crushes you.</p><p>For her, the <em>Iliad</em> is all about how force just steamrolls everybody, and nobody wins in the end. She examines the ending scene where the two sides reconcile, and there's a funeral pyre for Hector, and you're like, "What is the point of all this?" She links it to her conception of the Greeks and Christianity in this very brilliant way. I found that very rich, even though I think it's a mistake to take that interpretation and be like, "This is the only reading of the <em>Iliad</em>."</p><p><strong>[00:32:14] Dan: </strong>Yes. Okay, okay. Great, great. Okay. Tolstoy surprisingly said this, and I think you've tweeted it before, about Shakespeare's works.
He said he felt "an irresistible repulsion and tedium" and doubted "as to whether I was senseless in feeling works regarded as the summit of perfection by the whole of the civilized world to be trivial and positively bad, or whether the significance which this civilized world attributes to the works of Shakespeare was itself senseless." What is he missing?</p><p><strong>[00:32:44] Nabeel: </strong>[chuckles] Tolstoy has this great essay on Shakespeare, and he basically says, "I don't get it. This guy's a scam. Everybody's insane. Shakespeare is bad." Then he goes through <em>Hamlet</em> and <em>Macbeth</em>, and he's like, "This is bad. This makes no sense. This part of the plot is stupid. Why does Hamlet take so long?" Yes, I think he completely didn't get it. I think it's because he had an incorrect view of art.</p><p>His view of art was that art has this moral purpose. It should teach you what is right, what is good. This was from the period when he had his full conversion to Christianity, and afterward. He'd written <em>War and Peace</em>. He'd written <em>Anna Karenina</em>. He'd written these complex novels that to this day are among the greatest novels in the canon. Ironically, he then wrote a lot of, I would say, inferior art based on his theory of what art should be.</p><p>His take was, art should teach you the moral good: I'm going to write a bunch of parables about how there was this old man, and nobody liked him, and then somebody helped him, and this is an example of Christian charity. They're nice parables, but everyone reads <em>Anna Karenina</em>, not the parables. I think his theory of art was wrong. This is the reason Shakespeare rubbed him the wrong way: you don't read a Shakespeare play and go, "This is how I should live my life."</p><p>You read a Shakespeare play and go, "This is shocking. People are getting stabbed in the eye. There's this fool spouting nonsense poetry. What is happening?"
It's pretty much the opposite of his theory, but I think his theory was just incorrect.</p><p><strong>[00:34:12] Dan: </strong>Got it. Let's talk a little bit about Shakespeare. A few questions on the Henriad. Was Henry IV wrong to focus so much on Hal's relationship with Falstaff?</p><p><strong>[00:34:22] Nabeel: </strong>Yes. I think the core of the play, or part of the core of the play in some ways, is the compromises you have to make in order to get into power. For everyone's context, Hal is the person who ends up becoming King Henry V, but in the Henry IV plays, he is Prince Hal. He's this wayward prince. He's going to be the successor to the throne eventually, but for now, he's enjoying getting drunk with all these miscreants and interesting, unsavory characters in London.</p><p>One of them is Falstaff, who's this drunk, good-for-nothing but very eloquent, very fun guy. I think a huge part of the dramatic arc of the Henriad is Hal coming into power and then spurning Falstaff. Shakespeare does this thing where you think Hal and Falstaff are good friends, and then Hal has this monologue where he says, actually, I'm just making everyone think I'm really dissolute, because then they'll see me reform myself and become a king. Then you're like, "Wow, Hal's very Machiavellian. He's very manipulative." Then there's this famous scene where Hal is crowned Henry V, and Falstaff goes up to him, and he's like, "Hey, Hal, remember me, Falstaff?"</p><p>Hal's like, "I don't know who you are." He says, "I know thee not, old man." It's just a really cruel rejection. I think part of Shakespeare's purpose in writing these plays was to show that who you are when you're in power is not necessarily who you are in your private life. These two selves can be very far apart. You almost have to be manipulative in order to gain power and keep power.
I think that's a lot of what he wanted to show with the Henriad.</p><p><strong>[00:36:04] Dan: </strong>Do you think that Hal made a mistake rejecting him?</p><p><strong>[00:36:07] Nabeel: </strong>No, I don't think so. I think it was absolutely the right move. You can see the reaction from all the nobles around. They're a little bit nervous about Hal getting into power, and Hal's very smart about how he does this. He basically reassures them, "You were all friends with my dad. It's not like I'm going to kick you all out now. Actually, I'm very much going to continue the program." Falstaff is seen as this element of chaos within the fabric of England at the time. I think he had to reject Falstaff in order to secure his rule and assure everyone that he was going to be a good ruler. Now, is it sad? Yes, I think it's sad, but he had to do what he had to do.</p><p><strong>[00:36:44] Dan: </strong>Yes. Orson Welles did a movie on this called <em>Chimes at Midnight</em>. He's sort of obsessed with this idea where he says the central theme of Western culture is the lost paradise. You even see this today, right? The old times were better, but in his view this has been going on since the beginning of time, looking back at the old days. His interpretation was essentially that Falstaff's rejection symbolized the transition from the utopian era of merry England to the colder, more calculating age of the Industrial Revolution. Do you give any weight to that, or do you think he's missing something there?</p><p><strong>[00:37:17] Nabeel: </strong>I don't buy that at all. There's this view that, yes, life is getting worse at every stage. I think that just ignores a lot of the misery that existed before the Industrial Revolution. The Industrial Revolution caused a lot of good things to happen. People were happier as a result. If you have this mistaken view of things, then yes, that makes logical sense.
Once you correct that, then I don't see it.</p><p><strong>[00:37:42] Dan: </strong>What are the best Shakespeare sonnets?</p><p><strong>[00:37:44] Nabeel: </strong>Ooh, wow. That's a good question. I think all the famous ones are really good. I don't think there's any that are super neglected necessarily. I really like Sonnet 16, which has this really famous quatrain whose first line is, "So should the lines of life that life repair." This is a quatrain that works at 10 different levels. You can literally give 10 different glosses of what this quatrain means, and it's endlessly rich.</p><p>The literary critic William Empson, who I'm a big fan of, has this famous chapter where he just analyzes this and how ambiguous it is. I think Sonnet 16 is a really good one. The sonnets are great insofar as the famous ones are just really good. There's the one that is always recited at weddings, which I think is 116: "Let me not to the marriage of true minds admit impediments." It's a really good one as well. The way he uses all these nautical metaphors in it is really rich. You can't go wrong with Shakespeare.</p><p><strong>[00:38:46] Dan: </strong>Do you have any recommendations for secondary literature on Shakespeare?</p><p><strong>[00:38:48] Nabeel: </strong>Yes, I do. William Empson, who I mentioned already, has a lot of good analyses of Shakespeare's sonnets and poetry in <em>Seven Types of Ambiguity</em>. Then there's this critic called Stephen Booth. He's very obscure. He was at UC Berkeley for a long time. He published an edition of Shakespeare's sonnets that I'm obsessed with. It's one where the sonnets are a thin slice of it, and then the footnotes are 500 pages.</p><p>His introduction to those sonnets is incredible because he has this whole theory, which I think he partially got from Empson, about how ambiguity and illegibility are key to poetry and what makes poetry good, which I have an essay coming up on, actually. I think Stephen Booth's another one.
Then I enjoyed a book recently called <em>The Shakespeare Wars</em> by Ron Rosenbaum, who's a journalist. It's a kooky book. He's clearly a little bit nuts, but he just goes to a lot of Shakespeare events. He interviews a lot of Shakespeare critics. It's about a lot of disparate questions.</p><p>There's the authorship question, and there's the question of which performances of Shakespeare were the best. He, as a very passionate fan, just interviews a bunch of people about it. I would highly recommend that as well.</p><p><strong>[00:40:06] Dan: </strong>Are there any passages from Shakespeare that you have memorized? What makes them stick?</p><p><strong>[00:40:10] Nabeel: </strong>There are tons. I think one of the great things about Shakespeare is when you read him, you just get these snatches of the English language stuck in your mind, and you can't really explain why. Ever since I read it in high school, I always think about that <em>Macbeth</em> speech: "Tomorrow, and tomorrow, and tomorrow, creeps in this petty pace from day to day, to the last syllable of recorded time," et cetera.</p><p>I couldn't tell you why, really. It's a little bit miserable, but I think there's something magical about Shakespeare's language in that way. There's a lot of <em>Hamlet</em> that's really stuck with me. Even just at the very beginning of the play, they're waiting around, and it's really cold. I think it's one of the guards who just randomly says, "'Tis bitter cold, and I am sick at heart." You're like, "Why does he say that?" Shakespeare never explains it. I always think about that. What was going on with that guard?</p><p><strong>[00:41:06] Dan: </strong>Yes. I'm not sure if this is a tech or literature question anymore, but is René Girard overrated or underrated?</p><p><strong>[00:41:14] Nabeel: </strong>[laughs] I think he is a little overrated in tech for me now. When Peter Thiel brought him to prominence in the tech world, I think he was very underrated.
I do think he has a lot of very powerful philosophical frameworks that you can use to analyze literature and life. Mimesis, mimetic desire, and all these things are important ideas. I don't think he explained all of human life in the way that he wanted to, though.</p><p>I don't think mimesis is a good account of how people come to desire things, actually. I think it is true that some people have some mimetic desires some of the time, but there is also such a thing as what you truly want. I don't know how Girard really fits that into his framework. Then it's just a little bit suspicious to me that he was raised Catholic and French, and then his entire philosophical theory is about how Christianity is the one religion that explains everything, and it's the culmination of all religions.</p><p>I think that should make you a little bit suspicious, which is not to diss Christianity, obviously, but it just feels like very biased reasoning. He has this great theory about it and why other religions don't celebrate the victim as much, and then Christianity is the one religion that makes the victim the center of it. I think that's very interesting to think about. He makes these very sweeping claims to explain all of history via the scapegoating mechanism and things like that. I think they do apply to some scenarios, but there's a lot of life that doesn't fit into his frameworks.</p><p><strong>[00:42:48] Dan: </strong>What was your biggest takeaway from Norman Rush's <em>Mating?</em></p><p><strong>[00:42:51] Nabeel: </strong>[laughs] <em>Mating</em> was a very entertaining book. It's set in Botswana. It has this very erudite, very autistic female narrator who is a PhD student in anthropology, and she falls in love with this crazy guy called Nelson Denoon, who's trying to set up this mega project. He's trying to build his own city in the middle of nowhere in Botswana, and he's trying not to run it.
It's this matriarchy, basically, that's collectively governed, and he's very much trying to be the founder who founds it and then leaves. If that pitch alone didn't make you want to read the book, I don't know what will.</p><p>He writes this character as very, very obsessive and analytical about human relationships. She tries to dissect her relationship to Nelson and his relationship to his ex-wife and all these things in these very analytical ways. I think it will resonate a lot with people who felt like they had to do that. When I was a teenager, I was pretty socially awkward, and I felt like I had to reverse engineer how human interaction worked a little bit.</p><p>Because I was always more into computers than people initially, I had to really just be like, "Okay, how does small talk work? When people talk on the phone, what are they talking about?" These things were not obvious to me. I had to develop almost explicit theories of them. I think if you have that kind of brain, if you're very analytical about human relationships, you should definitely read <em>Mating.</em></p><p><strong>[00:44:17] Dan: </strong>Why should listeners read <em>Watership Down</em>?</p><p><strong>[00:44:20] Nabeel: </strong><em>Watership Down</em> is a beautiful book. I think it came out of nowhere. The author is this guy who just lived in a village in England. I didn't even know what the title meant. I thought it was about naval combat or something, but it's actually "down" as in the South Downs, a region of England. The down is-- I'm not even sure exactly what it means, but it's a part of the countryside.</p><p>Watership Down is a particular place, and it's about rabbits. That initially put me off as well, but then I just started reading, and I was swept up in it. It's one of the big myths of the 20th century, actually. It's up there with <em>Lord of the Rings</em> for me and the <em>His Dark Materials</em> trilogy.
It's this mythical story about the founding of societies as told through this warren of rabbits.</p><p>It's also a story about existential risk, actually.</p><p>[laughter]</p><p><strong>[00:45:10] Nabeel: </strong>Which I wasn't expecting. I won't spoil the plot, but at the very beginning of it, you have this rabbit who's a runt, and he has these prophetic powers. His brother takes him very seriously, but a lot of other rabbits don't. He basically says, "Something bad is coming, and we have to get out now. Otherwise, we're all going to die." They take this to the chief rabbit, and the chief rabbit just laughs at them.</p><p>Then he's like, "Never show your face to me again. I don't take you seriously at all." These two rabbits escape. They take a few true believers with them and then try and found a new city. That's the premise of the book. It's very relevant because you're like, "How would I have reacted in this scenario? This guy with prophetic powers is claiming that everything's going to come tumbling down." I feel like we're all talking about existential risks from AI now and things like this. It's actually quite resonant with those concerns.</p><p><strong>[00:46:04] Dan: </strong>You alluded to this earlier, but Harold Bloom has this idea that strangeness is the central thing that makes a work of art really, really great. Do you agree with this?</p><p><strong>[00:46:13] Nabeel: </strong>Yes, I do. I'm not a huge Bloom fan, but I think that's one of his more powerful observations. I've had this myself. It comes back to what I was saying before about art being defamiliarizing. You typically interpret the world through these categories, these abstractions that you just live your day-to-day life with on autopilot. I think art forces you to shake these up a little bit and perceive things in a strange way.</p><p>Naturally, it makes sense that a lot of the great works of art are just a little bit weird.
I think Paul Graham has this old essay where he talks about this as well, actually. I think it's Six Characteristics of Great Design or something like that. He goes through a bunch of examples of design that he likes. The original Porsche 911, I think, is one of them. In each of them, he says that strangeness or weirdness is actually a characteristic. There's just something a little bit funny about these things at first. There's something a bit weird about Shakespeare the first time you read it. It's deeply strange.</p><p>I think this is true of pretty much any good work of art you care to name. I think it's just because they force you to grasp things in this unfamiliar way. If a work of fiction or a work of art goes down too easy, or it's too legible, you should be very suspicious of it. You see these screenshots of poetry that go viral on places like Twitter or Instagram, and they're usually very easy. They have this conclusion that's a little bit moving or whatever. I think these are usually examples of bad art because they just give you this very simple emotion. Yes, I think the illegibility piece is very important.</p><p><strong>[00:47:54] Dan: </strong>What do you think makes for really good sci-fi? Does it have different rules than other literature? Is it just good literature with science dripped on top?</p><p><strong>[00:48:04] Nabeel: </strong>That's a really good question. I don't think I have a developed theory of this, but as for the sci-fi I really appreciate: I really like Vernor Vinge. He wrote <em>A Fire Upon the Deep</em>, <em>A Deepness in the Sky</em>, <em>Rainbows End</em>, and a few other books.</p><p>I think what makes his sci-fi so great is that it's written for highly intelligent people. It doesn't dumb things down. Another author I like along these dimensions is Greg Egan. I think he's an Australian mathematician. It's a similar thing.
There are a lot of these rich ideas that came out of the 20th century: physics, computer science, information theory, all of these things that just don't make it into literary fiction at all.</p><p>That's a limitation of literary fiction. I think really good sci-fi can take what is great about literary fiction and art in general but weave in these ideas from physics and CS in a way that is really stimulating. Someone who's clearly very great at this is Ted Chiang. Things like <em>Stories of Your Life and Others</em> play with-- I think it was the Sapir-Whorf hypothesis about language. It's wrapped in this very moving story that works as art in its own right.</p><p><strong>[00:49:17] Dan: </strong>He's very Borges, I find. He's basically like sci-fi Borges.</p><p><strong>[00:49:21] Nabeel: </strong>Yes, I agree.</p><p><strong>[00:49:23] Dan: </strong>Are there any authors out there that you feel you've never been able to get, similar to the question about film?</p><p><strong>[00:49:28] Nabeel: </strong>Yes. Gustave Flaubert is a good example. I really couldn't get on with <em>Madame Bovary</em> at all. I just found it dehumanized its protagonist. It's probably a translation thing. People say it's like the foundation of the modern novel. I've always been mystified by this claim, but the critic James Wood will argue this, for example. I think a lot of it is because the language is very precise, and he's very, very clear about what he's describing. Nabokov is a huge Flaubert fan. He thinks <em>Madame Bovary</em> is one of the best novels ever written. I think it's notable that Nabokov was fluent in French. Maybe there's something about it that I'm not getting in translation. When I read it, I was just like, "This is so tedious. The author just doesn't seem to like Bovary very much. None of the characters are very compelling or sympathetic to me." I do appreciate the precision of the descriptions.
Overall, I found it a little bit boring.</p><p><strong>[00:50:26] Dan: </strong>We got into this earlier when we were talking about the Orson Welles theory of idolizing the past. You hinted that your views are much more, "Hey, the Industrial Revolution was obviously great in so many ways. The past was cold and brutal." Now, there are some nuances on this. There are a couple of writers, some that you said you've read recently, Ivan Illich and Neil Postman, who have this flavor of what I'll call <em>Brave New World</em>; <em>Infinite Jest</em> has some of this, Ted Kaczynski even.</p><p>What they're commenting on, I view as sort of two things. One, our overreliance on technology. Two, just being addicted to entertainment. Those are the negatives of technology. Actually, you could even view this as a more recent phenomenon. In the 1900s, when we had washing machines and air conditioners coming in, we weren't so worried about people being addicted to VR or something. I'm curious, do you think that any of the folks I just mentioned, maybe specifically Ivan Illich or Neil Postman, make valid points?</p><p><strong>[00:51:19] Nabeel: </strong>Yes, I think they both make valid points. I tend to be optimistic on these questions. I think both of them skew pessimistic, I would say, but I think we can learn a lot from their critiques. Illich wrote this book, <em>Tools for Conviviality</em>, which inspired a lot of ideas in open source. His entire thing was that the tools we use should serve your creative purposes, they should be modifiable, and they shouldn't be trying to manipulate you or do things that make you a consumer, basically.</p><p>I think that's a really valuable lens to view technology through. When you look at things like the Vision Pro, I think it does have both sides to it, where it can be a powerful tool for producers eventually. For example, if you watch someone playing piano with it, it's pretty cool.
There's also a way in which these things are quite dark. It does take you towards this <em>Wall-E</em> vision of the world where everyone's overweight, and they're drinking their soft drink, and they have these glasses strapped to their heads, and they're just imbibing entertainment constantly.</p><p>I do feel like that vision lives in me as well as a warning of how technology should not go. In reality, what happens is you put these tools in front of people, and then they use them as they want. I think this is the flaw in Illich's work: not everyone is actually that motivated to be a creative programming genius using tools in these boundary-defying ways. Actually, a lot of people do just want to consume, and that's why Netflix is a major public company. I don't watch a ton of Netflix myself, but there's clearly an audience for that stuff.</p><p>I do think these tools have to serve everyone. I think a lot of people do have an unhealthy relationship with their phones. I do skew optimistic here, though; I think people adapt over time. Meanwhile, there are some growing pains.</p><p><strong>[00:53:17] Dan: </strong>Yes, you had a tweet that I loved a little while ago. I think this was you, and you said, imagine someone went back to 1990 with this warning about how to handle your phone. This is a real warning, it's for kids or something, and it says, "If you're feeling tempted to look at it, try this strategy: put it down and just count to 10, and don't look at it."</p><p>If you didn't know what a phone was, and you saw that this was a warning that people needed to read to not stare at the thing, you'd be like, "What did we create? This is like a crystal ball or something." It's a little weird, and stuff moves fast.</p><p><strong>[00:53:49] Nabeel: </strong>I thought that was crazy. I just had this sudden moment of, "What are we doing? We've made something so hypnotic."
We both have friends who are very optimistic about technology, but you can't tell me it's not a little bit creepy to watch a six-year-old scrolling TikTok for two or three hours. I've seen that. There is something off about it. They haven't necessarily developed their own creative purpose yet, and this content is just so engaging. I do want people to put tools in front of kids that help them express themselves and make them more capable human beings, not just sitting there and watching engaging stuff.</p><p><strong>[00:54:31] Dan: </strong>Michael Nielsen wrote, "I find probabilistic language models surprisingly irritating in some ways. Surely a big part of thinking is to create meaning by finding ways of violating expectations. The language models can seem instead like ways of rapidly generating nearly content-free cliches, not expectation-violating meaning."</p><p>You note here that this is getting at the limits of LLMs for creating something truly original, like Shakespeare's plays or general relativity. My question to you is, will LLMs be able to give us answers to the big questions?</p><p><strong>[00:55:01] Nabeel: </strong>[chuckles] I think this is a really stimulating comment by Michael Nielsen. There are two stories that came to mind immediately when I read this. One is, there was a teacher who used to teach poetry by giving his class a poem by Philip Larkin, the English poet. There's a line in this Larkin poem which goes, "A hothouse flashed uniquely." It's this beautiful line; it's very unexpected.</p><p>What he did was he would blank out certain words. He would blank out the word uniquely. He would say to his class, "All right, you have this line, a hothouse flashed X, and X is an adverb. What adverb would you put?" He said he taught this class for 25 years, and nobody ever came up with 'uniquely'. I thought that was such a good point about the value of poetry.
You could do this with Shakespeare, too. Nobody's ever going to write "the multitudinous seas incarnadine," from <em>Macbeth.</em> That's just such a strange line. You would never come up with 'incarnadine'.</p><p>Yes, I think it's critical to things like poetry that you come up with the most unexpected word, but not literally the most unexpected word. You wouldn't put 'shrimp' in there. That wouldn't make any sense. I think that's one thing to think about. I think that's what Michael's getting at: the best stuff often surprises you.</p><p><strong>[00:56:19] Dan: </strong>This gets to the strangeness in some ways too.</p><p><strong>[00:56:22] Nabeel: </strong>Exactly. Then the other story, quickly, is Christopher Alexander, the architect who writes a lot about beauty and art. He tells a story about-- I'll dig up the reference, but he has this painting, basically, and it's this old Italian Renaissance painting. There's a character, and they're collapsing next to a door, I think, and an angel is consoling them. It's got these normal colors in it. There's red and blue and white. Then there's this very incongruous stripe of black in the painting.</p><p>It doesn't quite make sense. What he asks you to do in his book is to cover up the black stripe. Then he asks you to notice how you feel. The point is that without the black stripe, the painting is nowhere near as powerful. Then he says, "Well, if you imagine you were looking at this painting as it is, would you have thought to paint a black stripe across the painting?" The answer is no. It's very jarring. I think that's such a brilliant observation. Again, it's just like the hothouse thing. The greatest art is weird.</p><p>To your question, the LLMs are going to get very, very good. They're going to be very, very helpful. They're clearly being deployed right now in things like workflow automation, but they're not very good writers yet. I'm not saying they can't be.
It does feel like there needs to be some other element there, where they're actually thinking about what the best word would be in this context, not just what one of the more probable tokens is.</p><p><strong>[00:57:50] Dan: </strong>Yes. What did you learn from Peter Hacker?</p><p><strong>[00:57:54] Nabeel: </strong>Peter Hacker was a professor of mine at Oxford. He taught me the later philosophy of Wittgenstein. Honestly, it's a little hard to summarize; he was just very, very big on close reading the text. I think he was very opposed to certain philosophers like Kripke, who took Wittgenstein and then very much took it in their own direction.</p><p>As an example, Wittgenstein has this whole section of the <em>Philosophical Investigations</em> where he talks about rule-following. He examines this philosophical issue of, basically: imagine you just saw a game, take chess, say, two people playing chess, and you don't know the rules of chess. How can you infer the rules of chess from these two players playing a game? This is actually very relevant to modern AI, incidentally.</p><p>In Hacker's view, some philosophers actually misinterpret Wittgenstein here: they think that what he's saying is you can never figure out what the rules of the game are from this, because there's an infinity of rules that are compatible with what you're seeing. Hacker says, "No, this is nonsense. This is not what Wittgenstein was actually saying."</p><p>Learning to read Wittgenstein was the thing I took away from him. Wittgenstein writes in this very aphoristic way where you really don't get what he's trying to say sometimes. He'll tell a story that's this schematic thought experiment. He might throw in one line at the end that's like, "Should we conclude X from this?" You're confused. You're like, "Okay, is he saying we should conclude X from this? We shouldn't conclude X from this? Is he being sarcastic?"</p><p>He'll make a joke remark, and then he'll switch to a completely different topic.
You'll be like, "What was he actually saying here?" There's this value in having mentors for a lot of things like this, where it's hard to make your way on your own. I think just having him as a mentor and learning how to make sense of Wittgenstein was immensely valuable.</p><p><strong>[00:59:48] Dan: </strong>What attracts you to Wittgenstein's later work over his earlier work? He's one of these famous people who did a 180 at one point in his career. Why do you like the later Wittgenstein?</p><p><strong>[00:59:57] Nabeel: </strong>Yes, this is a rich question. I think there are a few reasons. First of all, I think his earlier work was just wrong; he realized that, did a 180, and set out to explain why his earlier work was wrong. The most banal answer is that his later work is more correct. I actually think he got some things fundamentally right. That's one.</p><p>Two is that he basically realized the value of the illegible in human affairs. His early work very much has the vibe of logical positivism: all of human language can be reduced to these atomic propositions, everything is this neat logical structure, and you can build it all up from there. He realized that breaks. You see this in a lot of his later work, and especially his diaries. He has this book called <em>Culture and Value</em>, which is mostly taken from his notebooks, but it's about things like culture, music, religion, Shakespeare, all these things. I think you'd like it.</p><p>He has these really stimulating remarks on religion and Christianity. He was this tortured Christian who couldn't quite bring himself to believe in the straightforward way that a Christian would, but he was obsessed with Jesus and the New Testament and the religion. He had these really interesting theories of how religion actually works, which is that it's not just about an explicit set of things that you believe. It's not reducible to a list of propositions.
A lot of it is about the practice and the doing and the things you can't really express in language. I think that's very important. Illegibility is constantly underrated, especially by people who work in abstractions day to day. Actually, a lot of the most important parts of life are illegible.</p><p><strong>[01:01:38] Dan: </strong>What does Derek Parfit get wrong?</p><p><strong>[01:01:40] Nabeel: </strong>Parfit, I think, was a brilliant philosopher. He came up with a lot of really interesting ideas. I think he was mostly correct about personal identity; essentially, through analytic philosophy, he derives this Buddhist view of personal identity. He says that my relation to my younger self is actually not categorically different to, let's say, my relationship to you. Maybe I'm closer to my younger self, but you can think of these as almost three distinct people. I think that's very interesting.</p><p>I think his fundamental project was misguided, though. He wrote this whole tome whose point was basically that the three main ethical theories, utilitarianism, deontology, and virtue ethics, are all climbing the same mountain from different sides; they're all trying to say the same thing. He spent pages and pages trying to prove this. I don't think he was successful. I think he got that wrong.</p><p>I think there just are cases where utilitarianism gives you conflicting intuitions to virtue ethics and conflicting intuitions to deontology. These ethical conflicts are real. I don't think that you can reduce them all to the one true ethical theory. I think that's what a lot of his work, as I understand it, was trying to get at.</p><p>He made one interesting remark at one point: there's another English philosopher called Bernard Williams, and Parfit said that Williams was his most interesting philosophical contemporary. Williams was very, very different to him.
His whole thing is about the limits of ethics and how ethics actually can't tell you very much about the important decisions in your life.</p><p>Williams goes into a lot of like ancient Greek ethics. He's obsessed with Homer and things like that because he thinks that those traditions and the traditions in fiction are a lot richer than the theories that philosophers have come up with. I think that's a better take because Parfit very much worked within the strictures of utilitarianism, deontology, virtue ethics. I think life is just bigger than that.</p><p><strong>[01:03:45] Dan: </strong>From your perspective, the average person, they're not thinking in terms of utilitarianism, deontology, virtue ethics. How well-calibrated do you think the average person on the street's moral intuition is?</p><p><strong>[01:03:57] Nabeel: </strong>[laughs] That's a big question. I am an intuitionist actually. I basically think that you have to stand on intuition in order to do any moral reasoning at all, which is to say when you're presented with the trolley problem or an ethical thought experiment, what do you do? You test out various answers against your inner compass, your conscience, your intuition, whatever.</p><p>I think that if you accept that, then you are an intuitionist. In that sense, I think that people have this moral intuition. Now, does everybody have it to the same degree? No. I do think there is such a thing as trying to be a better person and a lot of the people I admire the most are very serious about trying to be a better person, but the trying to be a better person is always sort of by your own lights, if that makes sense, rather than through some external ethical theory. One way you could express it is, "Am I making my 13-year-old self proud of me?" 
I think that's a very powerful way to view it.</p><p><strong>[01:05:03] Dan: </strong>Do EAs make good CEOs or founders?</p><p><strong>[01:05:06] Nabeel: </strong>[laughs] I think generally no, [laughs] which isn't to say that they can't. There are examples of very good EA CEOs and founders. Holden Karnofsky is clearly brilliant. He founded GiveWell. It was a very misunderstood idea and most people didn't get it. Now it drives hundreds of millions of dollars of donations to neglected causes. I think that's an amazing achievement.</p><p>I do think with that being said, EA tends to attract a lot of people who are very idealistic, very philosophical, very abstract. They like to think about things in abstract ways. The meme that I've observed is a lot of nonprofits that are full of EAs tend to just write Google Docs about important topics. They're like, "I'm going to figure out AI x-risk. Let me open a Google Doc." Then they just write 20,000 words and then they feel like they figured out AI x-risk.</p><p>The world is just a lot more complicated and I think you have to go out and do things and operate. They tend to neglect the knowledge that you get from real operational experience and overvalue the knowledge that you can get from philosophical reasoning. I think that's one piece. I think that what does help them a lot is EAs are very mission-oriented. They're very driven to do good by what they view as good in the world. That is something that a good founder needs.</p><p>There are a lot of really admirable EA organizations that exist. Even aside from GiveWell, there's one that is aiming to cure lead poisoning among children in the world. Super admirable. The founder seems very formidable. I think her name's Lucia Coulter. 
There are good EA founders, but I think if someone comes to me, they're in their early 20s and their primary identity is "I'm an EA," it tends to be that they think way too abstractly about stuff and they just need to get a job for a couple of years and learn how to do things. Then they'll realize that a lot of these things are a lot harder and trickier than they maybe gave them credit for.</p><p><strong>[01:07:11] Dan: </strong>You've talked before about how selfishness might be an under-discussed trait of genius. Miyazaki, I think, was noted, I don't know if it was one of his autobiographies or if someone said this about him, but he was noted for being a little bit selfish. Can selfishness be good in any way?</p><p><strong>[01:07:25] Nabeel: </strong>Yes. It's just one of those things. What it is, is that selfishness correlates with being very driven toward the purpose that you have and not letting anything get in your way. It is very possible to be like that while not being selfish. Someone I admire a lot is Paul Graham, for example. He founded YC. He invented the web application, practically. He's written his own programming languages. He's written a lot of very influential essays. I think his essays changed my life. They made me go into startups.</p><p>I admire him a lot. He seems like a very not-selfish person. He has a family he loves. He helps founders all the time. I don't think selfishness is necessary to be achieving big outcomes. It does seem to correlate a lot with some artists that we otherwise admire. Ingmar Bergman, Picasso, horrible person. Miyazaki is a good example. His son absolutely hated him and just thought he was the worst father. His take was just, he's a genius animator and he's the worst father on earth.</p><p>I'll give you a couple of examples for color. One is that his wife had her own vibrant career as, I think she was an animator. I could be wrong. She was very senior, basically. 
He basically just said to her, "Look, I need to focus on Ghibli, so I need you to quit your job and take care of the kid full-time." She was very unhappy, but he basically insisted and she did it. She was miserable as a result, I think. He didn't care. He just worked 18 hours a day, 7 days a week, and neglected everything else. I think he knew that he wasn't a good father.</p><p>Another example is his son, Goro, made a movie and his dad attended the screening. Then about 20 minutes in, Miyazaki pretty ostentatiously walked out of the movie and he spent the rest of the time just smoking outside and not watching it. When Goro asked him why, he just said, "I don't think it's very good." Can you imagine your dad doing that to you? [laughs] There was clearly something messed up in his head, but on the other hand, he made these brilliant movies. It does seem to correlate. There are many, many other stories like this, but I don't think it's necessary.</p><p><strong>[01:09:33] Dan: </strong>Let's talk about music. Do you have any favorite albums of the last 10 years?</p><p><strong>[01:09:38] Nabeel: </strong>Yes. I'm afraid they're mostly cliches, but I think I really like <em>Blonde</em> by Frank Ocean. I really like Lana Del Rey. I think one musician I really admire is Nicolas Jaar. If you watch his-- I think he has a Boiler Room set that is one of the greatest things I've ever seen. He's essentially semi-improvising at the various electronic instruments. That, to me, is a new kind of music that wasn't possible before. It was pretty genius. Yes, I guess I'd say Nicolas Jaar, but otherwise my tastes in contemporary music are pretty normal.</p><p><strong>[01:10:17] Dan: </strong>All right. You've tweeted before about Beethoven's string quartet number 14, and you said it's just like the greatest ever. What do you love about it?</p><p><strong>[01:10:24] Nabeel: </strong>[laughs] Just listen to it. You got to listen to it. 
13 to 16, I think, are the really great ones for Beethoven. 16 was his last one. He was practically deaf, I believe, when he wrote them. He just does things musically in these that should be impossible. He'll do like a 14-minute fugue, but with a string quartet and it will work. Not only will it work, it will be extremely moving.</p><p>I think he was just operating at the edge of what is expressible in music somehow. You listen to this and you get the sense that music is ending somehow. [chuckles] It's almost breaking apart. You get this similar sense when you read something like <em>The Waste Land</em> by T.S. Eliot, where you feel like he's pushing poetry to its absolute limits. That's what Beethoven's late quartets feel like to me.</p><p><strong>[01:11:17] Dan: </strong>Great. Great. What's your view on re-imagined or modernized versions of either classical music concerts, opera, Shakespeare, et cetera? Do you think this is a good thing or do you prefer it just to be how it was for the last 100, 200 years?</p><p><strong>[01:11:32] Nabeel: </strong>[chuckles] I'm all for experimentation. I think a lot of the examples I've seen concretely of modern adaptations of Shakespeare, where they try and recast it, have not been good. An example would be they'll take something like <em>Antony and Cleopatra</em> and instead of setting it in ancient Rome or ancient Egypt, they'll set it in Weimar Germany or something like that.</p><p>I find they tend not to work well because usually the director's more interested in making a political point than they are in making good Shakespeare. Often it's with this limited contemporary understanding of politics. They'll use it to make a point about, I don't know, Donald Trump or something. It's just not really what you want to see when you're going to see Shakespeare. I feel like you want to focus on the art.</p><p>I think these things tend to distract. 
A good counter-example again is the Luhrmann <em>Romeo and Juliet</em>, which I think is set in modern America and it's really good. It is possible.</p><p><strong>[01:12:28] Dan: </strong>Let's talk about New York City. Is it overrated or underrated?</p><p><strong>[01:12:31] Nabeel: </strong>[laughs] I think New York City is great. I love New York City. I don't have a strong take either way; I think it's correctly rated. It is one of the top cities in the world. I will say, after traveling around a lot, I took six months to travel last year, a lot of the deficiencies of New York City are more visible to me now, maybe. The subway being maybe not as clean as other cities, things like that. It's a very rich playground for almost anything that you want to do. It's a good place to live.</p><p><strong>[01:13:09] Dan: </strong>What's the most underrated thing to do in New York?</p><p><strong>[01:13:12] Nabeel: </strong>[laughs] I like going to Washington Square Park and playing chess with the hustlers. I think more people should try that if you just know the rules of chess. The reason is they're very colorful characters. They've often been there for decades. They can just tell you stories and they're great fun. Go play chess with the hustlers.</p><p><strong>[01:13:29] Dan: </strong>I find that interesting. You actually have commented before that you cut down on playing chess because it's so addictive. It takes over your brain in ways that you don't like. What do you mean by that?</p><p><strong>[01:13:39] Nabeel: </strong>Chess is one of those games. I think it was Paul, it might've been Paul Morphy, who was one of the greatest chess players of all time. He was a lawyer in the 19th century. He beat everybody, but not by a little bit either. He really just wiped the floor with everyone, became the greatest player in the world, and then he quit chess completely and spent the last two decades of his life not playing chess. 
When asked why, he was just like, "To play chess somewhat well is like the sign of a gentleman, but to be the best at chess is a sign of a wasted life." [laughs]</p><p><strong>[01:14:11] Dan: </strong>Oh, interesting.</p><p><strong>[01:14:13] Nabeel: </strong>That's basically what I think. I do play it probably a little bit more than I would like, but life is about having fun as well.</p><p><strong>[01:14:21] Dan: </strong>What do you think about California? Is that overrated or underrated?</p><p><strong>[01:14:25] Nabeel: </strong>California is probably my favorite place in the world. I think it's not rated enough. People outside of California don't realize the degree to which it is where almost everything in technology is happening. If you even just take a snapshot of the things that have captured the discourse over the last couple of weeks, right, it's like the Apple Vision Pro, California, anything OpenAI, California. What else?</p><p>I feel like everything you can name has its origins in California and technology. I think if you're in tech and you don't move there for at least a year, you're doing yourself a disservice. Even apart from the tech thing, it's unbelievably beautiful. I think it's the most beautiful place on earth. It has some really rich traditions and other things that interest me as well.</p><p>A lot of interesting poetry came out of California. I'm really into the '60s and '70s poetry scene there, Gary Snyder, and people like that. It intersected with Zen Buddhism in a really interesting way. A lot of Zen teachers left Japan and moved to California and established Zen centers there. There's a whole strain of California Zen Buddhism that's quite rich as well.</p><p><strong>[01:15:34] Dan: </strong>Okay. Say you were put in charge of SF, what do you think it should fix besides housing and homelessness? 
Is there anything else that would make it better?</p><p><strong>[01:15:43] Nabeel: </strong>Apart from the two things you named, I do wish there was more to do in the arts. New York is just much more-- SF does have some, it has a great symphony. The SFMOMA is phenomenal. There's a lot of interesting stuff to do, but the problem is you can do it all in a week and you do want more than that. I think if I was an eccentric billionaire and I just had a lot of capital to throw around, I would consider founding more museums there or creating more rich public infrastructure. Right now it's way too limited.</p><p><strong>[01:16:16] Dan: </strong>Now, do you think that the homogeneity of SF being like just tech, not a lot of arts and not a lot of people that are-- maybe there are, but it's very, very focused on tech. Whereas New York, there's all sorts of different industries. People focus on all sorts of different things. Do you think that's part of what makes San Francisco, San Francisco? If it were to absorb some of New York's interestingness outside of tech, that it would lose something that it has, or do you think it could absorb it and stay with the magic that it has in tech?</p><p><strong>[01:16:42] Nabeel: </strong>Right. I think more of both. Right. I do think I agree with, I think it was Noah Smith who said SF should just be like Tokyo. It should be this massive thriving metropolis. I think he's right. I think people there on average might underrate, again, the value of more illegible contributions.</p><p>One discussion I remember having on Twitter a while ago was just why is there so little visible legacy left by tech philanthropists. If you look at someone like Andrew Carnegie or JP Morgan, JP Morgan has the Morgan Library in New York, which people still go to today, beautiful library. Andrew Carnegie built these gorgeous libraries all over the United States. They look like Greek temples. 
They're really nice.</p><p>I think given the immense amount of wealth in tech, people are right to donate that to causes that people like effective altruists would focus on. I think they do also underrate the value of just having a thriving civic life. Things like museums and libraries and the arts are probably underinvested in SF, I would say.</p><p><strong>[01:17:47] Dan: </strong>How important do you think professionalism and clothing style is? New York, notably, you just don't see a lot of hoodies and sweatpants. I feel like in San Francisco, whether it's sweatpants or yoga pants, or just like the athleisure vibe is everywhere. Do you think that that matters at all in any way for the city?</p><p><strong>[01:18:06] Nabeel: </strong>Again, I think this comes down to a view of human life. I think there is something freeing about SF in that, if you're really serious about tech and that's what you're serious about, then that's all you need to focus on. You don't need to worry about what you wear or anything like that.</p><p>I think if you have this view of human life that's more "well-rounded," then maybe somewhere like New York or London is a better fit for you.</p><p>I think it's nice that there are options. For the people who do want to just wear shorts and flip-flops, you can go live in the South Bay and it's great. I think the way people dress in LA-- LA is much more fashionable, but a lot of people do just wear athleisure 24/7 and that's fine.</p><p><strong>[01:18:48] Dan: </strong>What's your strategy for finding good places to eat in New York?</p><p><strong>[01:18:51] Nabeel: </strong>[chuckles] I struggle with this. The problem is it's a bit of a pain to get to the good food, basically. There is lots of good food in places like Flushing for East Asian food, or Jackson Heights for things like Pakistani food. To get to outer Queens from Manhattan or Brooklyn can be a little difficult sometimes. 
I do think it's like find someone who knows those neighborhoods well and just get them to take you to their favorite spots.</p><p>I do think things like Google reviews are pretty low signal in New York. I tend to find that everything just has like a 4.3 to a 4.6 and sometimes the 4.6 is not very good and overpriced. Sometimes actually the 3.8 is amazing. [chuckles] I don't know. I find Google not that helpful. I think it's like tribal knowledge. I ask people I trust, where do they like to eat and go from there.</p><p><strong>[01:19:43] Dan: </strong>What's the most underrated travel experience you've ever had?</p><p><strong>[01:19:47] Nabeel: </strong>Two that I enjoyed recently, one was, Taipei. I loved Taipei. I think more people should go visit Taiwan. It's absolutely incredible. It's this tropical island. It's very lush, it's very green. The people are super friendly and helpful. There's definitely a language barrier. It reminds me of Japan in that respect. People don't speak English. They do speak Mandarin there. It's just a phenomenal place. It's got high-speed rail, it's clean, it's convenient, it's easy to get around. There's a lot to see. I think a lot of people will go to Tokyo, they'll go to Singapore and they'll neglect Taipei, but Taipei is really fun. That's one. Also, the food is world-class. World-class food. You can feed two people for under $10 and you can eat really, really good stuff.</p><p>That's one. I think two is Norway. I really, really, really liked Norway. It's hard to explain exactly what I liked about it, but it's just stunning. When you drive around it, it just feels very old, but also modern in some ways. One thing that struck me a lot is you drive around rural America and you see pickup trucks and gasoline cars. If you drive around rural Norway, I saw a lot of Teslas, but parked at farms, which you just don't expect to see that. You're like, "Why does this farmer have a Tesla Model S?" 
I saw that a lot in Norway.</p><p>You just see these interesting things and the churches were amazing. They have these, I forget the name of them, but they're constructed all out of wood. Even the nails are made of wood and they look absolutely insane. They look like something out of a video game and it's just an extraordinary country. The people are really nice. Norway and Taipei.</p><p><strong>[01:21:30] Dan: </strong>Do you think it's more stressful to be a teenager or high schooler than it was back when you were in high school?</p><p><strong>[01:21:36] Nabeel: </strong>Well, I don't know. It's hard to say. Certainly, the discourse is that there's phones, there's Instagram, there's TikTok, people are comparing themselves to other people more. My intuition says yes. I don't know what the reality is. I don't remember being that stressed out personally in high school. Just the usual things that people struggle with, but I felt like I had a lot of spare time to read and explore and so on.</p><p>It does feel like the competitive aspects of life have ratcheted up since then. If you want to get into an Ivy League school, you have to do all these extracurricular activities. You have to be part of debate and all these things, but people can find that stuff enriching and fun too. It's a little bit hard to say on balance.</p><p><strong>[01:22:20] Dan: </strong>Let's talk a little bit about Twitter. You wrote a recent post just talking about how great you think it is. It's given you a lot. You're really good at it. What do you think it is specifically about Twitter that makes it so good for what you call the serendipity machine? Why is it so good at serendipity and helping you meet people, whereas Instagram, Snapchat, these other social media never quite got the hang of it?</p><p><strong>[01:22:44] Nabeel: </strong>Well, I think I'd say first of all, Instagram is good if you're interested in different things. 
A lot of people in the fashion or the music, or the movie industries, Instagram is their primary thing and they do meet a lot of friends through Instagram. I think that's valid as well.</p><p>Twitter is a place for people who are interested in ideas and it's for idea obsessives, for infovores. The way they use the term autists on Twitter, it's like, it's not really what autism actually is, but it's like its own category of just somebody who keeps up with everything and takes in all these different sources of information and is just obsessed with synthesizing it and joking about it and meme-ing about it. If you're into that kind of thing, it's a very rich playground.</p><p>I've met over 100 people off Twitter and had a good experience in every single case. It's usually somebody that I've been mutuals with for a while. You can tell a lot about a person from their Twitter feed or just looking at their likes and what they like and retweet. It's a really high-dimensional source of data on somebody's personality. I can tell if I just browse someone's feed whether I'm going to get on well with them or not. I think I've probably met you through Twitter and a lot of other people and everyone's really great. Highly recommend.</p><p><strong>[01:23:58] Dan: </strong>Are there any highlights of people that you've met in real life through Twitter, even if you don't have to give names, any good stories?</p><p><strong>[01:24:05] Nabeel: </strong>Yes. I'd have to think about this one for a little bit. Let's just say I've been invited to a lot of billionaires' houses. [laughs] That would never have happened if I wasn't tweeting. You land yourself in a lot of surreal situations. I think one interesting example of dark matter is just like, there's a lot of private conferences that occur and they tend to be, "We'll invite 10 people who tend to be active on Twitter and also are specialists in a particular area." 
I think if you don't have this public presence, you can miss out on a lot of this stuff. I think there is a lot of value in doing that.</p><p><strong>[01:24:48] Dan: </strong>How do you not get addicted to it? What's the right balance between creation versus consumption?</p><p><strong>[01:24:52] Nabeel: </strong>I think it's a constant balancing act, as you and I both know. What works is very individual for people. Some people set screen time limits, some people don't have the mobile app installed, they only use it through desktop. I find I'm okay with it. For me, what I do when I'm at home is I put my phone in a drawer and I can only use it if I go to that drawer, open the drawer, and stand there. I can't take it around the house. I can't lie in bed with my phone. Because then what happens is you go into a black hole and you scroll for two hours. I think as long as you minimize that scrolling, it's fine.</p><p>I do think there's a lot of basic mental health stuff that you need in place to do well on Twitter. You see this. A lot of people become prominent, they spend all their time arguing with their followers. They get sucked into this public persona and then they go a little bit crazy and sometimes they leave the platform or they just turn into this character that they weren't to begin with. I don't want to name names, but it happens to a lot of people.</p><p>Being able to take it when someone dunks on you or dislikes you and is public about it is one thing. That gets to a lot of people. Just generally not taking it too seriously. Part of that is realizing that people are meaner online than they are in real life. A lot of people who are dunking on you on the internet, if you met them, you'd probably have a great chat and you'd probably be friendly. Don't take it too seriously. [laughs]</p><p><strong>[01:26:11] Dan: </strong>What do you think are some of the recent product changes that Twitter has made? 
They've nuked links, so you can't really post a link anymore. It just shows up as an image, and then it's also not going to promote you as much if you have links in your tweet. You can now post an entire paragraph or blog post in one tweet. What do you think of these things?</p><p><strong>[01:26:29] Nabeel: </strong>I've been more positive on Elon Twitter than most people I know. I think long tweets were a really good idea. A lot of people said that it was ugly at the time, but I think basically for me, the argument is there's a lot of posts that never get posted if long tweets don't exist. There's a lot of things that aren't quite worth writing a blog post about, but you can't be bothered making a thread about them.</p><p>A trivial example, but Andrej Karpathy, the ML researcher, recently wrote his review of the Vision Pro as a long tweet and it's great. I don't think he'd have blogged that necessarily, but like long tweets enable all these things to exist that wouldn't otherwise have existed. I think that's good. The one thing I'm unequivocally not a fan of is nuking links. I think that has directly decreased the utility of the platform for me because I use it to find interesting things to read.</p><p>Now it's a little bit more about current events and things that are happening live in the world, but it used to be that people would publish, they still do, but they have to resort to all these hacks now. They have to post a screenshot of it or something and then you have to dig around in the replies to find the link. It's just annoying. I really wish they'd revert the links thing. I think the Substack thing is petty and they should not deboost posts with Substack links because it's actually really hard to find somebody worth reading on Substack. You have to dig through.</p><p>I think that decreased the utility of Twitter for me but in general, I'm pleased that they are shipping more. I think the platform is actually improving. It's still a great place to be. 
A lot of the people who whined about it and left have come back since. Overall, good.</p><p><strong>[01:28:07] Dan: </strong>Do you think that Farcaster, one of these competitors, has a shot at taking it over as a serendipity machine?</p><p><strong>[01:28:12] Nabeel: </strong>Hopefully, we'll just see a lot of competing ecosystems. I don't think any of these is necessarily going to kill Twitter. I've been on Farcaster, I've been on Bluesky. I think that Farcaster has definitely siphoned off a lot of high-quality discourse around crypto. For that reason, I'm not super active on it. I have nothing against crypto, but I'm not that into the ecosystem either. You can go on it and people like Vitalik or Venkatesh Rao are posting really interesting thoughts there. It is a rival ecosystem and I hope it does well.</p><p>Bluesky, I'm really not sure about. I think I was really, really a big fan of the decentralized ethos of it. A worrying property of Twitter is that it's a single point of failure for discourse. Right now you have Elon who owns it, but theoretically, if somebody less oriented towards free speech owned it, or if the government just decided that for whatever reason they wanted to restrict discourse on it, a lot of discourse would die as a result.</p><p>I think it's an appealing aspect of Bluesky's design, at least in theory, that you can't really shut it down because it's based on these multiple nodes, a little bit like BitTorrent or Bitcoin. I do think decentralization is a good property for social media to have in the future, and I want more experimentation along those lines.</p><p><strong>[01:29:32] Dan: </strong>If you were head of product at Twitter, what's the top change you would make other than bringing links back?</p><p><strong>[01:29:38] Nabeel: </strong>Oh my God, there's so many. 
A couple of things come to mind. One is I would love to be able to tune my feed based on what I want to see more of and what I want to see less of in a better way. I think you can do this with LLMs pretty easily now. What I mean by that is I see a thoughtful tweet, maybe it's a discussion of a quote from a book or something like that, I want to say to my feed "Show me more of these." Conversely, when I see some random video of someone getting punched in the face or whatever, I really want to see less of that. I find that just clicking on the three dots and saying, "Show me less of this," doesn't really work very well. I think better topic tuning is one.</p><p>The other thing I wish for: they introduced these bookmark folders, and there's this bookmark feature that's a first-class feature on a tweet now, but you can't really do much with it. Categorizing bookmarks into folders is a pain and it's impossible to find bookmarks that you made months ago.</p><p>I don't think you can search them. Maybe you can, but it's hard to dig through. I think the Twitter-as-a-tool-for-thought thing is undervalued. There's a lot of old content on Twitter that is just buried under the sands of time and Twitter in general can do a much better job at resurfacing that. It's so rich.</p><p><strong>[01:30:56] Dan: </strong>Let's talk a little bit and shift over towards AI. You spent a good chunk of time recently in San Francisco and as I understand it met with a lot of folks working at the major AI labs. I'm curious what's your sense for how critical it is to be in SF? You got to this a little bit earlier when we talked about San Francisco and New York, but if you're specifically working in AI, how critical is it to be there?</p><p><strong>[01:31:16] Nabeel: </strong>I think it's very important to be in SF for AI. It is really the center of it. To make it concrete, it's like: where are most of these innovations actually coming from? They're coming from OpenAI and Anthropic. 
They are both based in San Francisco. Most of the AI researchers are therefore based in San Francisco.</p><p>If you're not there, you are not getting access to what's being talked about, that's not on Twitter, not on the internet. It's illegible. If you're not at those discussions, you're going to be missing out on the bleeding edge. Someone said something pretty depressing. I think it was on one of the podcasts where they said, "A lot of what gets published anymore is not any good and all the good stuff does not get published because it's a trade secret now."</p><p>If you're OpenAI and you figured out how to make AGI, you're not going to publish that. Yes, I think the value of this dark matter is important and I think paradoxically, the agglomeration effect of being in SF is even more important. I definitely think people should go there. With that being said, there's plenty of successful AI companies not in SF too. I do think AI is going to be this very broad-based movement. Mistral was probably the leading open-source effort outside of Meta, and they are based in Paris.</p><p><strong>[01:32:29] Dan: </strong>What's your sense for the ratio of people out there who are worried that the world is going to end, the doomers, versus just straight-up optimists? It's hard to tell if you're just on the internet because there's just a couple of loud people on both sides, but what do you think it actually is?</p><p><strong>[01:32:43] Nabeel: </strong>I think that it gives you a very misleading picture of this. Basically, most people are very uncertain. When you talk to people, I think the only thing that everyone agrees on is that it's probably going to be a big deal. Everyone's reaction is like, "Oh my God, AI. Ah. [laughs] Everything's going to change." I think everyone agrees on that. I think some people are more doomery.</p><p>Some people maybe don't see what the doomers are talking about. I would still say that's the majority of people. 
I do think doomerism, it's a slightly pejorative label, so maybe we shouldn't call it that, but people who are worried about x-risk, I think they're still a minority actually. I think if you talk to the average AI engineer, my take would still be that they are like, "What's the fuss?"</p><p>That's why they're building it. You have GPUs going brrr all over the place in San Francisco and new models are coming out. They're building it because they want to build it. They see it as inevitable; they want to be the ones to make it happen and they think that we have a shot at making it safely. I do think that's the median opinion.</p><p>I think the thing that gets a lot of online attention and clicks is these two extreme positions. One is like, AI has a 99% chance of killing us all. I've never quite seen the concrete argument there, and I think that's probably what most people think of it. On the other hand, you have e/acc, which is like, "Accelerate and don't worry about safety at all," and that's probably a little bit crazy as well. I think it's like life. Most people are moderates, they're in the middle, they can see both sides, but the two poles get the most attention.</p><p><strong>[01:34:19] Dan: </strong>Of folks you talked with in the industry, what do you think the average AGI timeline is?</p><p><strong>[01:34:22] Nabeel: </strong>[laughs] I don't know about the average, but I have a very biased sample from going to parties full of AI researchers. They think it's likely going to be here in the 2020s. Outside that sample, I would guess most people probably think the median is somewhere between 2035 and 2040. There's a lot of people with much more aggressive timelines than that.</p><p>I don't know how serious people are when they say this, but a lot of people will say, "Oh yes, two years." I don't believe that. It's very uncertain. People forget that-- I don't know the exact dates, but GPT-4, so it's what? February 2024 right now.
That thing probably finished training in the summer of 2022. That's over a year and a half ago now.</p><p>Whatever is cooking in there is probably much more powerful. There's a bunch of big unknowns because you have LLMs, fine, but what happens if you actually get synthetic data and self-play working? Can you then bootstrap them much faster? Then what about all the attempts to combine LLMs with search? It's unclear, because these things aren't public. Generally, I think most people in SF are pretty bullish on AI within this decade and they see the rest of the world as somewhat oblivious to this fact still.</p><p><strong>[01:35:45] Dan: </strong>What's your timeline?</p><p><strong>[01:35:47] Nabeel: </strong>My timelines are relatively short, I would say, in the sense that I would be pretty surprised if we don't have something that everyone can call AGI with a straight face by 2035.</p><p><strong>[01:35:58] Dan: </strong>Shifting over to more mundane utility, just like today, what's the most interesting use that you get out of LLMs?</p><p><strong>[01:36:05] Nabeel: </strong>I use ChatGPT a lot. I rely on it pretty heavily when I'm doing programming tasks just because I do think it's a good programmer, actually. You really have to know what you're doing to use it properly because it will do really stupid stuff, or it'll just write really ugly code. I think you do have to know what you're doing, but there's a lot of stuff in programming that's very tedious, and you can just ask the LLM to do it and it'll do a pretty good job.</p><p>There's a lot of stuff that's harder for humans that is pretty easy for LLMs, and you can just ask it to do it to minimize your own cognitive effort. If you know that there's an efficient way to do this query and it's some complicated algorithm, you can just say, "Hey, LLM, I think you have to use this algorithm to solve this. Can you just write that in Python for me?" It'll do it in less than one minute.
It would probably take me an hour to do this properly. That's a huge time-saving.</p><p>I think number one is programming. I find it very helpful to brainstorm with for things like essays or books that I'm reading. We talked earlier about deep reading things like <em>The Iliad,</em> and I actually found it fun to talk to GPT-4 about <em>The Iliad.</em> I would put in some Greek text and I would just be like, "Hey, can you show me the different ways translators have rendered this?"</p><p>Sometimes it would do a pretty good job, or I would just dig into, "What does this word actually mean in ancient Greek? What are the connotations of it?" It's really, really good at that. I think for deep reading it's maybe a little bit underrated. It knows a lot about a lot of random things, but you have to be very good at eliciting that knowledge.</p><p>Then there are uses that I haven't cracked myself yet, but that other people like. One is writing. I still don't find it that useful there. I really dislike the writing style it comes up with, and I haven't found it to give super useful comments on drafts either, or at least nothing that I couldn't have figured out myself. That's one.</p><p>Two is, a lot of people find it useful for therapy or therapy-adjacent uses. They'll put their journals into it and go, "What can you tell about me?" I don't know. I don't find that super compelling yet, but I'm bullish. I think people are exploring all the uses of it right now. The thing I'm maybe most excited about is that the number of tedious tasks that humans do in the world is so high, and I think automating all those workflows is actually a moral imperative. [chuckles]</p><p>I think it's a huge waste of human brain power to be doing a lot of these things. The sooner we can automate this stuff, the better.
I think most people maybe view that as a little scary, like, "Oh, we're automating everything." No, I think we should do that as quickly as we can.</p><p><strong>[01:38:39] Dan: </strong>I ask this question a lot, but if you just take GPT-4, current-day models, and we were to just deploy them throughout the economy, how many of the use cases do you think we've found, versus in five years we'll be like, "Oh my God, I can't believe it took us so long to realize it was useful for X"?</p><p><strong>[01:38:54] Nabeel: </strong>That's a really good question. My sense is that we're still just scratching the surface. People are still figuring out how to use this thing. You probably remember when you first started using Google. I think now our use of Google is way more sophisticated than it was back then. I think GPT-4 is like that, except it's way more high-dimensional [chuckles] than a search box.</p><p>The uses will be endless. I think a lot of what's going to happen over the next 5 to 10 years, though, is that actually deploying things into real-world settings is very, very complicated for reasons that have nothing to do with AI. There's regulation, there's user experience, training, all these things. A simple example would be medical use cases.</p><p>The amount of time that doctors waste on taking notes and writing up notes and generally sitting in front of a computer, clicking things in EHR software, is insane. It's a huge black pill. [chuckles] I think just being able to automate that for them so they can spend a bit more time caring for patients would be great. Now, is the capability there technically? Yes. Has anyone done this for the whole economy yet? No. I think that's what a lot of the next 5 to 10 years will look like.</p><p><strong>[01:40:10] Dan: </strong>One thing that you've talked a little bit about is that back in the low-interest-rate environment, it felt like FAANG engineering, and tech in general.
These jobs were unrealistic. You have total comp that is really, really high, and people not necessarily working a ton of hours. I'm curious, do you think that we're going to find a new equilibrium? Do you think AI will change software engineering as this really high-status, highly paid job?</p><p><strong>[01:40:35] Nabeel: </strong>I actually think the opposite here. I think it's going to increase it. I think the comp in that industry is going to go even higher and software engineers are going to get better and better paid. Maybe this is going to cause some social problems. You see it now. If you look at the stock market right now, Nvidia has gone completely insane. It's up some ridiculous percentage. I think it's worth $1.7 trillion right now. The entire Chinese stock market is worth about $8 trillion. Nvidia is basically eating the economy. I think right now we're in an era where being able to use AI makes you as a software engineer more productive, if you know how to use it correctly.</p><p>It actually improves productivity for a lot of engineers. I think that should increase comp for engineers. I think what you will see, though, is that you can have companies that are run by fewer people as a result. One engineer can maybe do the job of two now. It's not quite that easy, but that's the direction that I think AI will go in, where you can have smaller companies. Everybody is extremely well compensated, though, because the returns to automating the world, and the APIs of everything, and automating everything are still so high. What you'll see is increasing returns to software, I think, over the next decade.</p><p><strong>[01:41:55] Dan: </strong>How bullish are you on the economy just generally? Should you go lever up and just dump it into the S&amp;P?</p><p><strong>[01:42:01] Nabeel: </strong>I'm mega bullish. Most of what I have is in the S&amp;P. I had this dumb tweet go viral a long time ago that was about the Roaring Twenties, and I think that was true.
I remember tweeting that during the pandemic and everyone was like, "Ah, we're in a pandemic. Can't you see that?" I think that was correct. I'm mega bullish. I think what we're going to see is high total factor productivity. We're clearly seeing revolutions along multiple dimensions right now. I think LLMs are exciting because in any corporate setting, you can deploy them in some useful way.</p><p>People are still figuring out what that is for a lot of industries. I've had customer service delivered by LLMs, and it was fast and it was great. This is already happening. I think things like this are going to cause massive productivity gains. How the employment picture shakes out is anyone's guess, but I'm generally pretty optimistic about the US economy. Where it gets trickier is in other places.</p><p>I'm from the UK, and the UK is having some tough economic times. I think the question there is just, will they allow themselves to reap these productivity gains? Because I think over there, what you have is the governance and all the local rules getting in the way of them getting anything done. A lot of economies are going to surge ahead. A lot of economies that were previously great might get left behind.</p><p><strong>[01:43:33] Dan: </strong>Let's talk a little bit about this post you wrote on puzzles and problem-solving. I really liked this one. You compare the traits of someone who's really good at solving, say, a chess puzzle versus becoming a great founder or something, and the differences between different types of problems. There's one concept I was wondering if you could just explain, which is the difference between what you call science brain and founder brain.</p><p><strong>[01:43:55] Nabeel: </strong>I wrote this a while ago.
The way I saw it was basically that founders have to be maybe a little bit more delusional about what they're doing, whereas with science brain, you're constantly trying to refute everything and really figure out what is true. You're very skeptical of every notion by default, unless you can prove it. This is actually very paralyzing in early-stage startups. I think frequently in early-stage startups, you're faced with a scenario where you have a thing, it works, but it's not growing 100% a month or whatever. You have to figure out what to do with it.</p><p>You can come up with a lot of ideas. Then it's very, very easy for a skeptic to shoot down every single one of those ideas. You could say, "Okay, well, let's build AI agents." "Oh, no, there's 100 other startups doing that. Why would we be the ones to succeed? Think like a Bayesian." Then you can do that for every single idea that you can possibly come up with. The net result is that you do nothing, because you refuted every single idea you came up with.</p><p>I think in that way, the science brain can work against founder brain. Founders do have to be optimistic in the sense that they will try stuff that has a low likelihood of success. They will try lots of things very, very fast. Eventually, if they persist, something will work. I think the scientific and engineering mind is often not amazing at this because it's too good at spotting the flaws in everything.</p><p><strong>[01:45:22] Dan: </strong>I'm curious on problem-solving, there's a meme on Reddit. This is actually something I found as more of a chess newbie, you might say, where they'll always show the classic new player on Chess.com whose puzzle score is really good but whose endgame score is terrible; they just can't win the game. I find this experience too, where I can play the puzzles because I know there's an answer. I can sit there and stare at it.
I improve, but then I get in the game and I almost feel like I never know what the answer is. Do you see any of this translating to your framework for problem-solving and how you relate it to founders?</p><p><strong>[01:45:58] Nabeel: </strong>The internet author Gwern has one of my favorite blog posts of all time. It's all around this theme of how, if you know that there is an answer to something, the thing becomes radically easier. You see this effect all the time. The classic example is the four-minute mile: as soon as Bannister ran the four-minute mile, suddenly a lot of people did it right after him, because suddenly everyone was like, "Oh, if this is possible, maybe I can do it too." Then there's a story, famous among scientists, about Claude Shannon, the information theorist, where I think he was trying to solve a puzzle.</p><p>Then somebody came by and said, "I could tell you something." That was all he needed, and then he solved it. Again, it was a similar thing, where someone just gave you a hint, at the right time, that something is possible. Garry Kasparov said something very famous about chess cheating, which is that you need a very small amount of information to cheat very effectively at chess. In fact, you only need one bit: if you're in a critical position, sometimes you don't know it. You might've experienced this. You're in a position and you don't know that there's a tactic that could win you the game.</p><p>If someone just buzzes at the right moment, maybe they flash a light or something, that tells you you're in a critical position, and suddenly you find the tactic.</p><p><strong>[01:47:10] Dan: </strong>How interesting.</p><p><strong>[01:47:11] Nabeel: </strong>A lot of games are not won because somebody didn't realize there was a tactic in the position.
This translates in all kinds of ways, but I think fundamentally, people will get very good at problem-solving in the artificial environment of solving puzzles, but then the real world is a much more wicked environment. You don't necessarily know what framework to apply to the situation in front of you. That's the whole difficulty.</p><p><strong>[01:47:40] Dan: </strong>Actually, talking about this idea of iterating and the founder versus science brain, you note in your post basically that for founders, "you know when something isn't working way before you're actually able to admit it to yourself. It's best to save yourself time by admitting that it's not working sooner. Cutting your losses is iterating quicker." This gets to how you answered the first question. You want the cycle time to be as fast as possible. The more cycles, the higher your chances of success overall.</p><p>Now, I'm curious how to balance this, because you get some conflicting startup advice sometimes. In Peter Thiel's Palantir principles, he notes that what you really want to avoid is diverting your attention too often, because applied effort often has a convex output curve. Basically what he's saying here is, don't get distracted by shiny things, but yet you want to iterate quickly. What's the right balance here? Do you have any insight or intuition on how to balance this?</p><p><strong>[01:48:28] Nabeel: </strong>This is the classic explore-exploit problem. It's a huge issue in startups in general, because on the one hand, you have this lean startup mythos, which is: try a bunch of stuff, throw it against the wall, see what sticks. Then Peter Thiel likes to set up the opposing position of, no, you must have a vision of how the world must be. You must stick to it. Forget the feedback, just go and press on and make that vision real. I think the reality is always some combination of the two.
I think, again, it's a case where these dichotomies were a little bit misleading.</p><p>I think it is very, very important to have that vision of the world, but then be very flexible about how it should be achieved. A very good example is Peter's own company, PayPal. I think they initially were wiring money between PalmPilots. What they actually realized was taking off was they were helping people sell Beanie Babies on eBay. They leaned into becoming the payment processor for eBay sellers. That's how they got traction. The original vision for PayPal was actually a world currency much like Bitcoin is today.</p><p>Clearly, it didn't end up becoming that. I do think there's this element of you have to be flexible and willing to pivot on a dime. At the same time, it's hard to argue with something like SpaceX. The whole reason they're going to Mars, it's not for economic reasons. Elon's a genius at this. He'll probably figure out a way to make money from it eventually. They're doing it fundamentally because he has this vision that humans should be multi-planetary. I think ultimately, if you do want to do something that makes a dent, then you have to have that vision. I think if you just take the iteration thing too seriously, what a lot of people end up with is incremental software. You'll end up maybe building a pretty good B2B SaaS company or something like that, and it will be economically valuable. Don't get me wrong, but it won't be that SpaceX going to Mars visionary thing.</p><p>Another quick example is OpenAI. They're all the rage now, but from, I don't know, 2016 onwards, they were just trying lots of random stuff in AI. They just looked like a bunch of weirdos in a garage. People forget this part of the story. 
I think it took them having this vision that machine intelligence's time had come for them to stick with it long enough that one of their bets, on large language models, actually did work out in the end.</p><p><strong>[01:50:53] Dan: </strong>We've talked a little bit about books, and film, and all sorts of different types of activities that some people might just describe as entertainment. I'm curious, how do you distinguish between the two, if you want to really go and learn something versus you're just entertaining yourself? When you're watching a movie, do you have in your head like, "Oh, I'm learning versus I'm just unwinding and entertaining myself," or what is the difference between these two things in your mind?</p><p><strong>[01:51:18] Nabeel: </strong>[chuckles] That's such a great question. It's really deep. I actually want to write an essay about this. I think a couple of things. One is I think a lot of forms of what seem like learning are actually entertainment. I think the reason for that is to really learn something in a way that sticks, you basically have to derive it for yourself in your own schema. Often the best way to do that is to sit down with a blank sheet of paper and maybe a textbook, and then eventually be able to derive everything yourself. Almost nobody does this because it's really, really hard.</p><p>The result is that most people walk around without explicit models of the world in their mind. I think that what happens in practice is people will listen to content that's quite engaging. A simple example is if you want to know more about economics, you'll listen to something like the <em>Freakonomics</em> podcast, which is very good. You will learn some bits about econ, but you can listen to 100 episodes of <em>Freakonomics</em> and not walk away with a clear model of the economy in your brain.
I think we need to distinguish between these activities of taking in bits of information that don't necessarily fit into a schema versus creating the schema in your brain.</p><p>I think what that schema tends to look like is a model. You can tell, because the people who have done this, who have done this high level of first-principles thinking, tend to have very distinctive opinions. Tyler Cowen is one of them. He's always asking you, "Okay, what is your model of this situation?" It's always this jarring question because you're suddenly like, "Oh, I don't really have a good model of this." I was lucky because I had this economics tutor at uni; we used to do imitations of him because he would always interrupt us when we were saying anything and be like, "What is your model?"</p><p>We just got very good at thinking in that way. I think people need to construct their own models. If they don't do that, then it's hard to argue that something's learning. The other thing I would say is, you asked about art, and I think art sits in this boundary case where a lot of it is just entertainment, but some of it is less entertainment and more experience. You can learn a lot from experience. When you travel to a country, you learn a lot. That's not just reducible to explicit statements. You learn what it is like to be in the place. If you have, let's say, a romance that doesn't work out, you learn a lot from that experience. Sometimes in ways you can't really summarize, but you feel like you got wiser.</p><p>I think a lot of art sits in that category where you come away from it with this highly compressed experience of something. You can't say exactly what it is. Think about <em>Hamlet</em>, think about <em>King Lear</em>. You watch <em>King Lear</em>: he's being a fool. He promotes his two sociopath daughters and neglects the one who's really good. The fool makes fun of him and he ignores him the whole time.
You can't just boil it down to, okay, make sure you listen to dissenters. You could, but you'd be wrong. You go through this experience when you watch <em>King Lear</em> that is valuable at the end of it. I think it's inaccurate to say that's just entertainment. It's more in the category of a memory that you go back to a lot.</p><p><strong>[01:54:44] Dan: </strong>What should one do on a sabbatical? How do you know when it's time to take one?</p><p><strong>[01:54:49] Nabeel: </strong>I think most people don't do this enough. Sam Altman has this great line. I think it's spend five years working on a thing, one year orienting, or something like that. I think most people neglect the one year orienting. A lot of people don't spend enough time making sure that what they're doing is the thing they're meant to be doing with their lives. It's too easy to stay one more year because the money's good or whatever it is. Everyone has very valid reasons for doing this. To be clear, life is expensive. You have to have money in order to do things like buy houses, and have children, and so on.</p><p>I think if you can afford it, you should definitely take a sabbatical. A simple framework for it is the hill-climbing problem from computer science. Often you'll climb a hill and it'll just be the local hill. There are bigger hills that you could be climbing elsewhere. I think the sabbatical is the randomization device that plucks you off the local hill and puts you in a strange part of the territory. Now you're in the middle of the forest again and you have to explore. I think that's very healthy and stimulating. You'll end up trying a lot of stuff.</p><p>One thing that's interesting that you learn, and it was maybe a little hard-won knowledge for me, is that you cannot just sit there with a blank Google Doc and figure out what you want. You cannot introspect and out of that produce a list of the things you want.
You actually have to try a lot of different things and do a lot of different things. Then some things will stick and some things won't. Those are the things that you double down on.</p><p>Some things will just resonate with you more, but you have to figure this out empirically. It's not knowledge that you can introspect or just dredge up, I think. There are some things that do help. For example, figuring out what you did naturally as a teenager, that's actually a very good signal. I think sabbaticals are great because you get to try a lot of stuff that you just don't have time to do when you have a day job.</p><p><strong>[01:56:39] Dan: </strong>If you currently have a day job, do you think that you should have some sense of what it is you want to spend the time on? Or do you develop that? Is part of the sabbatical developing that, like, "Oh, I'll go figure out what I'm going to try"?</p><p><strong>[01:56:50] Nabeel: </strong>I think it's one of those things where having a plan is always good, but you often end up throwing the plan away a couple of months in, and that's fine. The planning is helpful. I would say for myself, I didn't have a super defined sense. I had a few things I wanted to try out. I knew I wanted to travel for an extended period of time. I wanted to go spend more time with my family. One of the things I found a little depressing about full-time work was just that my family's abroad, so I'll see them for one week at a time, which is how much leave you can get. Then you have to go back.</p><p>You want those deeper moments where you spend more like a month with them or something. Just being able to do that was valuable. Then there were things like, for example, the reading lists I had constructed for myself. I had a bunch of classics that I'd missed over the years. The <em>Iliad</em>, <em>Moby-Dick</em>, <em>Middlemarch</em>, things like that. I read all of those.
I had a reading list on the history of technology, which I'm still working through, but it contained a lot of critiques of technology. Things like Ivan Illich, and Neil Postman, and things that we talked about.</p><p>I wanted to immerse myself in that perspective because I've always come at things from a techno-optimist lens and I wanted to really challenge my own views. I think as long as you have sets of these activities that you want to do and you have a plan for yourself economically and so on, then it can be a really good thing to do. The thing I would strongly urge on anyone who does this is: don't just sit in your room and scroll your devices all the time, because I see a lot of people do this.</p><p>They do nothing all day, they don't engage in enough activities. Just get involved with any community you can. Twitter is great for this, but find a lot of people, have coffees with them, do what it takes, but get active in some way and try a lot of different stuff. Then you'll learn more about yourself through that.</p><p><strong>[01:58:44] Dan: </strong>You have a page of principles on your website, your personal page. The number one is figure out what makes you imbalanced as a personality. What makes you imbalanced?</p><p><strong>[01:58:53] Nabeel: </strong>I'm an extreme infovore. I think I just read everything. I read pretty fast. I watch a lot of stuff. I'm quite wide. I think a lot of people are relatively narrow in one area maybe, and then wide in some areas. I feel like I just do well when I go deep serially on things. I went very deep on film. I went very deep on various aspects of tech over the years.
I feel like being an extreme infovore and, related to that, being maybe an extreme generalist is, I think, what makes me quite different from other people.</p><p>All my good friends, for example, get a little tired when they read too many links, or browse Twitter too much, or read too many blog posts or too many books in one day. I never get tired of that. I can just do it all day. Actually, I have to develop guardrails to not do this too much because it ends up being a waste of time, but I can just take in information all day. I think that's my imbalance.</p><p><strong>[01:59:55] Dan: </strong>What's the most impactful cold email you've ever sent?</p><p><strong>[01:59:59] Nabeel: </strong>The one that comes to mind is, I cold emailed Tyler a few years back and we ended up becoming good friends. I don't think I've told this story, but I emailed him a work of fiction that I made at the time, which I didn't end up doing anything with. It was a stub of a novel, which I ended up completing, but I decided I didn't want to publish it. I sent him the first 30 pages. I was just like, "Can you just tell me if this is any good?" [laughs] I didn't expect a response, but I'd looked up to him for a long time and had always read his blog. Tyler being a magician, within a day he comes back with a bunch of comments. He's like, "I think it's great." It meant a lot to me.</p><p>I think that was a simple example where I didn't expect him to read it, but he did. The encouragement was really valuable to me at the time. More generally, I think it's very important for people to get this out of the way as soon as possible. Cold emailing people who you think are too important, or too famous, or too cool to speak to you, whatever it is, just do a lot of that as soon as you can. Do it in college, do it in your early 20s.</p><p>You learn very fast. It sounds very dumb, but you just learn really quickly that these are normal people.
They have things that make them imbalanced too, and that's why they're successful. These are people who are in a room and sometimes they're confused about what to do next. They're just like you in all these relevant ways. When you see that, you realize that you too can do something great. I think it's a huge leveler-up for your ambition.</p><p><strong>[02:01:33] Dan: </strong>Have you ever gotten anything out of meditating?</p><p><strong>[02:01:37] Nabeel: </strong>Yes. I think meditating is an incredible thing to dive deep into for a while. I'm not a regular with it, but I think you'll learn a lot of important truths from it that are very hard to pick up otherwise. A really simple example is your attitude to pain or discomfort. When you're meditating, everybody has this experience where you sit still for a while and then your knee starts hurting or whatever. What meditation teachers generally tell you is to push through that and play with the sensation of pain. Then you realize this thing early on, which is that the sensation of pain can be separated into two parts.</p><p>There's the physical sensation and then there's your mental reaction to it. Actually, the mental reaction is also within your control. Then you have this very surreal experience sometimes. There are types of pain where you should get medical attention, but we're not talking about those. We're talking about things where you just get discomfort from sitting for a while. You realize you can make the mental sensation actually change and relate to the pain in a different way where it's not that bad. Eventually, you stop feeling it entirely. That's a really interesting lesson.</p><p>You actually learn this from running too. I run a lot and sometimes, I don't know, something will hurt for a while. Then if you just run through it, it will go away. Sometimes it gets worse, and then you should stop.
You learn that there's this weird subjectivity to a lot of mental sensations, and the sooner you learn that the better. David Goggins, who's the self-help, running military guy, I think he has this rule that when you think you've really got nothing else left to give in you, you actually can go for another 40% or something like that. I think that's true. You learn this.</p><p>I used to run marathons and there's a moment at mile 20 when you're just like, "I cannot go on. I physically have to stop," and you realize you can go on. I think learning this radical subjectivity thing, which I think in Buddhism, they call it emptiness, when you really internalize this, it helps you a lot in your day-to-day life because you'll get faced with stressful situations at work, or in your startup, or whatever it is, or somebody dunks on you online, or something. Being able to just relate to that pain in a different way is one lesson. There are many others, but everyone should try it, I think.</p><p><strong>[02:03:59] Dan: </strong>What do you worry about regretting late in life?</p><p><strong>[02:04:04] Nabeel: </strong>I worry about not doing enough and not fulfilling my potential. I think that's the short answer. I also think that your view of life changes as it goes on. I'm sure that by the time you get to the end of your life, what you will find valuable about your own life will change. What I mean by that is I think when people are young, they have this strong desire to make a legacy, and leave something behind, and all of that.</p><p>It seems that a lot of people get older and they realize that's not that important because actually in 20 years, probably people aren't really going to remember you anyway unless you really did something significant, which is 0.001% of the population. What really matters maybe is the family you had, or the connections you made, or whatever it is. 
I do worry about not doing enough, but I also try and hold that lightly because I do think your attitude to this changes.</p><p><strong>[02:05:06] Dan: </strong>Now, you tweeted about this before. It does seem like there is an interesting number of people who achieved a lot in their life. Steve Jobs is probably the most salient one to me, where he would talk all the time about knowing that he's going to die and reminding himself of it. Do you think that's actually a healthy thing to do?</p><p><strong>[02:05:22] Nabeel: </strong>I think so. Definitely some people can be more morbid about it, but I think people don't have enough consciousness of the fact that life is limited. One of my favorite essays, it's such a clich&#233;, but the Seneca essay <em>On the Shortness of Life</em>, I think everyone needs to read that because he says in all these very vivid ways that if you just accounted for all the time that you wasted, you would be horrified.</p><p>You can always do more, that's rule number one. It's just, you can always do more. You and I talked before this about starting a podcast and things like that. The actual amount of time that people actually take to do things that are important to them is quite low, but we spend a lot of time in the in-between place. I think we can all just move a little bit faster and ship a little bit more.</p><p><strong>[02:06:14] Dan: </strong>For productivity, shifting gears just a little bit here, do you get value from Roam, Notion, these extra SaaS apps in your life or are you more of an Apple Notes guy?</p><p><strong>[02:06:27] Nabeel: </strong>[laughter] I am more of an Apple Notes and Google Docs guy. I was really into things like Roam and Obsidian for a while. I think my view on this changed with things like GPT, which is basically, I think a lot of those tools encourage you to form explicit schemas for your own knowledge and categorize things and tag things in certain ways. 
Actually because we have models that understand human language now, what it's going to look like is that you have an intelligent AI assistant. It will find whatever you need it to find or make whatever connections you want it to make.</p><p>The highest alpha that you will have is just browsing through your old notes and maybe it resurfaces those for you in some way. Now, I think it's just going to be very unstructured, which is pretty much what Apple Notes is. AI is going to make up the difference. I think some people are very, very structured. I know a lot of people who are power users of Notion, for example, and it works well for them.</p><p>I found for myself that the effort it takes to set up those systems is not actually worth it for me relative to the gains I get. I think my memory is pretty good for things like conversations I have, things I read, things I watch, whatever it is. Maybe that's part of it. For me, it's minimalist Apple Notes, Google Docs. That's it.</p><p><strong>[02:07:45] Dan: </strong>I took that same journey. I was using Obsidian a couple of years ago. Recently, I've just slowly gotten way back into maximum Apple. Let's talk a little bit about your experience working in tech. You joined GoCardless, I found this interesting. As I understand it anyways, you were employee number eight, and you were pretty early in your career. You ran BD and sales for them, closed their first 10 enterprise customers. That's a pretty intense environment to be in, to be at a small startup, closing big customers, and being responsible for revenue at a really early age in your career. How did you ramp up with presumably little background in sales?</p><p><strong>[02:08:24] Nabeel: </strong>I was really young. I think I was 21 or 22. I'd graduated. I'd done a few months in consulting and I hated it so much. I remember reading a Paul Graham essay about startups and being like, "I want to do that." 
Unfortunately I was in London and there weren't that many startups at the time. I remember looking up UK startups that had gone to Y Combinator and GoCardless was one of the only two. The other one was Songkick, which was also successful. Anyway, I ended up joining GoCardless because I was very, very impressed with the founders. They were very energetic, impressive, brilliant people.</p><p>They're all super successful today. Tom is one of them, Tom Blomfield. He went on to found Monzo, which was another billion-dollar company. He's now a partner at YC. I think it was a really good decision. It was really funny because what happened was I interviewed and they were sufficiently impressed with me that they gave me an offer, but, because I hadn't studied programming in undergrad, they said, "You're the only business guy that we're hiring, and we don't want business guys around here."</p><p>What they did was they gave me a Ruby on Rails book. They were just like, "Take four weeks to read through this and then come back when you can program." I was like, "Okay." Luckily I'd been programming as a teenager, so I knew how to program. I was just a little rusty. I learned Ruby on Rails and then I joined GoCardless full-time. That was great because I became full-stack insofar as I had to do BD things, which essentially means growth now. If I wanted to, I don't know, ship a change to the website, I had to do that myself. I couldn't take any engineering time. I just had to learn to operate myself. I was lucky because those founders were incredible mentors. Matt Robinson is now a big angel investor in Europe. He was one of the founders. He took me under his wing. He was like, "Look, we're going to do sales together. I'm going to teach you how it goes." He gave me a bunch of books to read. I read them. I think we talked about this before, but a lot of the content on how to sell is really, really bad. 
I'm actually constantly surprised that when I meet somebody who knows me from my online presence, over half the time, they actually mention this how-to-sell essay that I wrote as one of the things they liked the most.</p><p>It's funny to me because I think it's really badly written. I just wrote it one day, annoyed, and just shipped it without revising it very much. It's had a life of its own. I think it's just because a lot of the content on selling out there is bad. What I figured out, and this goes back to our convo earlier about reverse engineering social interaction, the way an autist understands it, is that you are actually able to reverse engineer sales. It's not as complicated as people make it out to be. Moreover, a lot of the tactics that people teach you are very counterproductive. It's like sales is not about selling; you have to qualify the prospect.</p><p>Often it works better just to say, "Look, the product actually isn't for you, you'd benefit more elsewhere." What it is, is just knowing who your target customer is really well, understanding what they do right now and where the pain is, then once they're aware of that problem, then just explaining your product very neutrally. People are smart. They will know if they want to buy your thing. You don't have to convince them.</p><p>Typically, it's that they don't have the problem. That's the real issue with selling. That's one thing I figured out. Enterprise sales is its own beast. It's a very complex process. There are lots of stakeholders that you have to get aligned. We can talk about that as well. These are just things I learned by doing very, very fast in a very intense environment.</p><p><strong>[02:11:58] Dan: </strong>What do you think was the key cultural aspect of that company when you joined? 
For you, if you're looking at a startup, what do you think is the most important cultural aspect that you would look for if you're going to guess whether or not it's going to be successful?</p><p><strong>[02:12:10] Nabeel: </strong>It definitely set the blueprint for me. One thing I think that shouldn't be controversial is that it's much more impressive for somebody to found a billion-dollar company in early 2010s UK than it is to do that today in the US because the US is a bigger market, you know what I mean. It's like if someone did that in the UK in the 2010s, that's just much harder, I think. These guys did this. A couple of them have done it twice. Clearly, there's something going on there. I think what it was for me that was very striking was extreme intensity. People were brutally competitive. It's unfashionable now, but people would get in at seven o'clock and leave at 10:00 PM every single day.</p><p>I just got used to working like that as well. At the time, it was a real grind. Everything had to be done now. It was unacceptable to be like, "Okay, I'll take care of this tomorrow, or that's not my job, or things like that." Everything was your responsibility. You had to do it with extreme dispatch. There was a lot of value placed on having a primary focus and being very clear about that. If anyone asked you at any point what's your primary focus, you had to be ready with an answer. I think you'll find these traits in common to a lot of outlier success, early stage startups.</p><p><strong>[02:13:28] Dan: </strong>How important do you think the idea is to a good startup? One thing I sometimes wonder is, can the best founders actually just enter super crowded markets and win?</p><p><strong>[02:13:37] Nabeel: </strong>I think the idea does matter a lot. One thing about startups is you have to hold all these opposites in your mind constantly. I think two things are true. One is that the idea does matter a lot. The other thing is that it's also a starting point. 
These feel a little contradictory. I think the reason the idea matters a lot is because it is important to have some differentiated insight into the world if you're going to do a startup and do it well.</p><p>Whether it's that some particular area of the economy is broken in a way that feels important or whether it's that-- In the case of something like Palantir, the foundational insight was that you didn't need to trade off between civil liberties and security in the way that most people thought. You could actually have software that enhanced both. I think that wasn't something that most people were thinking about at the time. The dichotomy was either you have surveillance software or you're a freedom fighter and you're very anti-surveillance software. This is fine, but I think there's a lot of interesting room in the middle to enhance both things that people care about.</p><p>Similarly, with both GoCardless and Stripe, it's a view that there's a part of the economy where having a good legible API to it would be very, very important and unlock an insane amount of value. Stripe executed on that insight incredibly well. They had this developer-first strategy in the early days and now it's materially impacting the GDP of the internet. I think having that differentiated insight is super important. At the same time, it is just the case that a lot of startups end up pivoting substantially from their initial idea. I do feel like the startups that tend to be bigger tend to be the ones where the pivot is maybe a little bit less distant from the initial idea though.</p><p>It tends to be that the idea was had, maybe they had to tweak it a bit, but they tried it and it basically took off. [chuckles] I think if you look at those examples where it's like, "We tried eight different ideas and eventually one caught on," they still do well, but they don't necessarily become the Microsofts, or the Apples, or the Googles of the world. 
If you think about those three unicorns, it's not that all of them worked immediately, but none of them really departed that significantly from what they were trying to do from the beginning.</p><p><strong>[02:16:01] Dan: </strong>What do you think that Peter Thiel saw in Alex Karp at Palantir?</p><p><strong>[02:16:05] Nabeel: </strong>Alex is a really, really interesting guy. He's obviously brilliant. I think that's the first thing. It is a bit strange because he's a philosophy graduate. He did his dissertation on Habermas and then he was asked to lead this company of very gifted Stanford software engineers mostly. The question is valid. It's like, why did he do that? I think Alex is really interesting. I think what a lot of people don't know is that he ran a successful hedge fund before becoming the CEO of Palantir. He ran this in Europe, I think in Switzerland or something, and he was already doing very well from that.</p><p>It's not like he needed a job as it were. I think he has a number of superpowers that make him very valuable. Of the two that spring to mind right now, one is that he has this insane Spidey sense for talent, where he does these very non-traditional interviews that can be very short. He'll ask very strange questions and then he'll have a pretty strong opinion on whether that person's good or not. Often it'll be someone who you wouldn't necessarily think is good, but he'll be like, "This person's brilliant," and he'll often turn out to be right. Early Palantir as a result was this magnet for talent. If you look at the 2010 to 2015 era, some of the most brilliant engineers in the valley were working there.</p><p><strong>[02:17:28] Dan: </strong>What was he doing? What do you think he had? Is it just a superpower or what were these questions?</p><p><strong>[02:17:35] Nabeel: </strong>I think it's this non-legible Spidey sense. I don't know that it can be taught. He asks strange stuff. 
I don't want to give too much away, but it's almost like having a personal chat with him about nothing to do with work. Then at the end of it, he'll come out with a pretty high dimensional view of you.</p><p><strong>[02:17:54] Dan: </strong>Interesting.</p><p><strong>[02:17:55] Nabeel: </strong>That's one thing. He has this notorious Spidey sense for talent. Then I think the second thing is in sales and closing deals. If you look at the customers Palantir has, they work with most of the governments in the Western world. They work with a lot of the Fortune 10, Fortune 500. They work with clients that a lot of tech companies will not typically get to work with, including things like intelligence services. A lot of that is down to Alex being very good at moving among that echelon of people.</p><p><strong>[02:18:29] Dan: </strong>Notably, he's a philosophy guy and so are you. Peter Thiel, of course, is as well. There's not a ton of these in tech, but it does seem like there's many really high-performing ones. How do you think the two relate?</p><p><strong>[02:18:41] Nabeel: </strong>Reid Hoffman is another one. Obviously, Peter is a great example. The relationship is I think pretty simple. It's like one, if you did philosophy, you're probably very curious about the world. Now, again, not always, but it does correlate. That's one thing. Two is that philosophy really teaches you how to think very rigorously and logically from first principles. This was the thing I took away from it. The way it's taught at Oxford, everyone has to do a bunch of classes in formal logic in their first year. You just get very, very good at taking any argument and just decomposing it into, "Okay, premise, conclusion, this doesn't quite follow from this and so forth."</p><p>You can get very good at that. I think that's very useful in reasoning about the world. It's this generically powerful skillset that I think everyone should acquire. 
If you're already good at this, if you're a skillful engineer, say, I don't necessarily think you should spend a lot of time reading Plato. You could do it for fun or if you want to enrich yourself generally, but there's diminishing returns to going deep into Nietzsche or something like that, unless you're curious about those questions. I think everyone should get that formal logic skillset under their belt.</p><p><strong>[02:19:56] Dan: </strong>From your time at Palantir, were there any of Peter's traits that you felt were really salient throughout the company?</p><p><strong>[02:20:02] Nabeel: </strong>Definitely. I think he set the cultural blueprint for the company in a lot of ways. I think one is the intensity thing I talked about earlier. He's a very intense person. It's a very intense company. People worked very, very long hours. People worked extremely hard. There was no excuse for not winning, basically. It's an intense culture, which to me, I think correlates with success. I think two is the value of secrets. I think he likes to maintain a lot of illegibility around what he's doing.</p><p>I think the company had this for a long time, much less so after it went public. There was this air of mystery about it. To this day, the meme is, nobody can tell you what it does. That's true. Underneath it, there is a real thing. He has this line, which is, substance always wins over style, focus on substance. It is substantive, but there's also a lot of secrecy around it. I think that can lead people to underestimate how important the company actually is.</p><p><strong>[02:21:10] Dan: </strong>Were there any things that you noticed that Palantir did exceptionally well that you think other companies are missing? You already hit on some of them, the intensity.</p><p><strong>[02:21:18] Nabeel: </strong>I think so. I think there's a few things that come to mind. One is just that I do think working with sectors of the economy that are harder to figure out is very, very valuable. 
What I mean by that is things like the military, intelligence, crime, whatever it is. These are areas that most tech companies and tech people don't want to touch. The simple answer to that is that if you don't touch them, they will be worse and they will be more unjust. I think this belief in getting into the muck is really helpful. I think the second thing is the culture is very much one of going onsite.</p><p>Meaning the expectation of you as a software engineer was that you were going to visit your customer a lot, and work alongside them, and sit by them day by day. When I was in the airplane factory, I was working with aeronautical engineers. I was literally talking to the guys who were hammering bolts onto planes and things like that. Similarly, when I worked at NIH, I was working side by side with biologists, bioinformaticians, cheminformaticians, all these people all day, every day.</p><p>I think what you learn deep in your bones, and I think the reason why a lot of Palantir people end up leaving and doing startups, is that you have to get out of the building. You have to get out of the room and you have to go and talk to people and talk to customers as much as possible and learn how they work very intimately, and then build technology to magnify their powers. I think there was a recent YC batch where there were more ex-Palantir founders than ex-Google founders.</p><p><strong>[02:22:59] Dan: </strong>Oh, interesting.</p><p><strong>[02:23:00] Nabeel: </strong>Hilarious, because Google has at least an order of magnitude more employees. It's a very founder-y culture.</p><p><strong>[02:23:10] Dan: </strong>It's been a good two and a half hours. Nabeel, if people want to find you online, where can they find more of your work, your talks, whatever?</p><p><strong>[02:23:18] Nabeel: </strong>My handle online is generally nabeelqu, so N-A-B-E-E-L-Q-U. Twitter is usually where I'm most active. I recommend people go visit my website. That's nabeelqu.co. 
DM me, say hi, let's be friends.</p><p><strong>[02:23:34] Dan: </strong>Thank you for your time today. This was a lot of fun.</p><p><strong>[02:23:36] Nabeel: </strong>Thank you.</p>]]></content:encoded></item><item><title><![CDATA[Sarah Constantin]]></title><description><![CDATA[Ultrasound neuromodulation, AI, and cancer research]]></description><link>https://www.danschulz.co/p/sarah-constantin</link><guid isPermaLink="false">https://www.danschulz.co/p/sarah-constantin</guid><dc:creator><![CDATA[Dan]]></dc:creator><pubDate>Mon, 26 Feb 2024 12:01:55 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/142057779/0877981f58c5aff226dce53714efe278.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>Sarah Constantin works at <a href="https://nanotronics.co/">Nanotronics</a>, a company building AI-controlled factories for every manufacturing industry. She has a Math PhD and previously worked on machine learning for drug discovery, ML video analysis for self-driving trucks, and was a data scientist at Palantir. 
</p><div id="youtube2-tuBeEWyIDLo" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;tuBeEWyIDLo&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/tuBeEWyIDLo?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><iframe class="spotify-wrap podcast" data-attrs="{&quot;image&quot;:&quot;https://i.scdn.co/image/ab6765630000ba8ad6d77d92ed0526217e21c283&quot;,&quot;title&quot;:&quot;Sarah Constantin&quot;,&quot;subtitle&quot;:&quot;Dan Schulz&quot;,&quot;description&quot;:&quot;Episode&quot;,&quot;url&quot;:&quot;https://open.spotify.com/episode/2Kqt1U6SU6M7VvLtL47htz&quot;,&quot;belowTheFold&quot;:false,&quot;noScroll&quot;:false}" src="https://open.spotify.com/embed/episode/2Kqt1U6SU6M7VvLtL47htz" frameborder="0" gesture="media" allowfullscreen="true" allow="encrypted-media" data-component-name="Spotify2ToDOM"></iframe><div class="apple-podcast-container" data-component-name="ApplePodcastToDom"><iframe class="apple-podcast " data-attrs="{&quot;url&quot;:&quot;https://embed.podcasts.apple.com/us/podcast/undertone/id1693303954?i=1000646853274&quot;,&quot;isEpisode&quot;:true,&quot;imageUrl&quot;:&quot;&quot;,&quot;title&quot;:&quot;&quot;,&quot;podcastTitle&quot;:&quot;&quot;,&quot;podcastByline&quot;:&quot;&quot;,&quot;duration&quot;:&quot;&quot;,&quot;numEpisodes&quot;:&quot;&quot;,&quot;targetUrl&quot;:&quot;&quot;,&quot;releaseDate&quot;:&quot;&quot;}" src="https://embed.podcasts.apple.com/us/podcast/undertone/id1693303954?i=1000646853274" frameborder="0" allow="autoplay *; encrypted-media *;" allowfullscreen="true"></iframe></div><h3><strong>Timestamps</strong></h3><p>(0:00:00) Intro</p><p>(0:00:22) Ultrasound neuromodulation</p><p>(0:02:33) Where is it in the tech 
lifecycle?</p><p>(0:03:48) Why should it be possible?</p><p>(0:07:44) How impactful will neuromodulation be?</p><p>(0:12:08) Startups working on neuromodulation</p><p>(0:16:33) Could we read minds?</p><p>(0:18:54) Public acceptance of neuromodulation</p><p>(0:27:36) Neuromodulation vs AI</p><p>(0:31:55) AI and drug discovery</p><p>(0:35:53) AI x-risk</p><p>(0:43:34) What would make Sarah worried about AI?</p><p>(0:47:00) Is human intelligence simple?</p><p>(0:51:01) Probability of solving aging</p><p>(0:56:28) Is cancer research doing something avoidably wrong?</p><p>(1:05:48) Are aesthetic judgements a kind of moral judgement?</p><p>(1:09:44) What should a 14 year old do to understand Sarah&#8217;s view of the world?</p><h3><strong>Links</strong></h3><ul><li><p><a href="https://sarahconstantin.substack.com/">Sarah&#8217;s blog</a></p></li><li><p><a href="https://twitter.com/s_r_constantin">Follow Sarah on X</a></p></li><li><p>Sarah's post on <a href="https://sarahconstantin.substack.com/p/ultrasound-neuromodulation">&#8288;ultrasound neuromodulation&#8288;</a></p></li><li><p>Sarah's post "<a href="https://sarahconstantin.substack.com/p/why-i-am-not-an-ai-doomer">&#8288;Why I am not an AI Doomer&#8288;</a>"</p></li><li><p>Sarah's post "<a href="https://srconstantin.github.io/2015/07/11/aesthetics-are-moral-judgments.html">&#8288;Aesthetic judgements are moral judgements&#8288;</a>"</p></li><li><p><a href="https://twitter.com/dnschlz">&#8288;&#8288;Follow Dan on X&#8288;&#8288;</a></p></li><li><p>Subscribe to Undertone on&nbsp;<a href="https://www.youtube.com/channel/UCtRjQ8EA3Via7KXzv9F173Q">&#8288;&#8288;&#8288;YouTube&#8288;&#8288;&#8288;</a>,&nbsp;<a href="https://podcasts.apple.com/us/podcast/the-world-of-yesterday/id1693303954">&#8288;&#8288;&#8288;Apple&#8288;&#8288;&#8288;</a>,&nbsp;<a href="https://open.spotify.com/show/59YkrYwjAgiKAVMNGWPaLE">&#8288;&#8288;&#8288;Spotify&#8288;&#8288;&#8288;</a>, or&nbsp;<a 
href="https://www.danschulz.co/">&#8288;&#8288;&#8288;Substack&#8288;&#8288;&#8288;</a></p></li><li><p>Watch or listen to previous episodes of Undertone with <a href="https://www.danschulz.co/p/tyler-cowen">&#8288;Tyler Cowen&#8288;</a>, <a href="https://www.danschulz.co/p/vitalik-buterin">&#8288;Vitalik Buterin&#8288;</a>, <a href="https://www.danschulz.co/p/scott-sumner">&#8288;Scott Sumner&#8288;</a>, <a href="https://www.danschulz.co/p/samo-burja">&#8288;Samo Burja&#8288;</a>, <a href="https://www.danschulz.co/p/3-steve-hsu">&#8288;Steve Hsu&#8288;</a>, and <a href="https://www.danschulz.co/">&#8288;more&#8288;</a></p></li><li><p>I love hearing from listeners. Email me any time at dan@danschulz.co</p></li><li><p>Share anonymous feedback on the podcast: <a href="https://forms.gle/w7LXMCXhdJ5Q2eB9A">&#8288;&#8288;https://forms.gle/w7LXMCXhdJ5Q2eB9A</a></p></li></ul>]]></content:encoded></item><item><title><![CDATA[Alexey Guzey]]></title><description><![CDATA[Science, productivity, advice, and utilitarianism]]></description><link>https://www.danschulz.co/p/alexey-guzey</link><guid isPermaLink="false">https://www.danschulz.co/p/alexey-guzey</guid><dc:creator><![CDATA[Dan]]></dc:creator><pubDate>Tue, 13 Feb 2024 16:48:23 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/141639460/9a4a6ceee881bf88ebb6c0bd5ae012af.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<h3><strong>Timestamps</strong></h3><p>(00:00) Intro</p><p>(00:23) How accessible should a good research paper be?</p><p>(03:47) Good taste</p><p>(07:07) What does Russia get right?</p><p>(12:05) Favorite Dostoyevsky novel</p><p>(14:16) Utilitarianism</p><p>(16:42) High IQ v genius</p><p>(17:59) Tyler Cowen&#8217;s advice</p><p>(20:13) Bad science</p><p>(31:25) Productivity</p><p>(32:55) Updating beliefs</p><p>(35:18) Meditation</p><p>(37:53) Religion</p><p>(40:02) Starting a blog</p><p>(41:20) Video games</p><p>(42:48) Go outside</p><p>(45:19) David Goggins</p><p>(49:23) 
Advice</p><p>(51:04) True Detective</p><p>(56:05) Alpha in low status</p><h3><strong>Links</strong></h3><ul><li><p><a href="https://guzey.com/">&#8288;Alexey's blog&#8288;</a></p></li><li><p>Follow <a href="https://twitter.com/alexeyguzey">&#8288;Alexey on X&#8288;</a></p></li><li><p><a href="https://twitter.com/dnschlz">&#8288;&#8288;Follow Dan on X&#8288;&#8288;</a></p></li><li><p>Subscribe to Undertone on&nbsp;<a href="https://www.youtube.com/channel/UCtRjQ8EA3Via7KXzv9F173Q">&#8288;&#8288;&#8288;YouTube&#8288;&#8288;&#8288;</a>,&nbsp;<a href="https://podcasts.apple.com/us/podcast/the-world-of-yesterday/id1693303954">&#8288;&#8288;&#8288;Apple&#8288;&#8288;&#8288;</a>,&nbsp;<a href="https://open.spotify.com/show/59YkrYwjAgiKAVMNGWPaLE">&#8288;&#8288;&#8288;Spotify&#8288;&#8288;&#8288;</a>, or&nbsp;<a href="https://www.danschulz.co/">&#8288;&#8288;&#8288;Substack&#8288;&#8288;&#8288;</a></p></li><li><p>Watch or listen to previous episodes of Undertone with <a href="https://www.danschulz.co/p/tyler-cowen">&#8288;Tyler Cowen&#8288;</a>, <a href="https://www.danschulz.co/p/vitalik-buterin">&#8288;Vitalik Buterin&#8288;</a>, <a href="https://www.danschulz.co/p/scott-sumner">&#8288;Scott Sumner&#8288;</a>, <a href="https://www.danschulz.co/p/samo-burja">&#8288;Samo Burja&#8288;</a>, <a href="https://www.danschulz.co/p/3-steve-hsu">&#8288;Steve Hsu&#8288;</a>, and <a href="https://www.danschulz.co/">&#8288;more&#8288;</a>.</p></li><li><p>Share anonymous feedback on the podcast: <a href="https://forms.gle/w7LXMCXhdJ5Q2eB9A">&#8288;&#8288;https://forms.gle/w7LXMCXhdJ5Q2eB9A</a></p></li></ul><h3>Transcript</h3><p><strong>[00:00:12] Dan Schulz: </strong>Okay. Today, I'm talking with Alexey Guzey. He leads a research nonprofit called New Science and blogs at guzey.com. 
Alexey, welcome.</p><p><strong>[00:00:21] Alexey Guzey: </strong>Thanks for having me.</p><p><strong>[00:00:23] Dan: </strong>First question, how accessible should a good scientific research paper be?</p><p><strong>[00:00:28] Alexey: </strong>I'm not sure I have a take on this, or rather I'm not sure if accessibility per se should be the goal or lack of accessibility should be the goal. I guess scientific papers are very inaccessible, usually, but it's expected in any kind of specialized endeavor. I don't think it's particular to science. It's probably the same in-- Well, continental philosophy is famous for being inaccessible, for example.</p><p>Ideally, of course, we do want papers to be accessible, but there are trade-offs. There is a reason why jargon exists, and it's because you compress information. There is this trade-off between being dense in information and accessible in a way to other scientists. I think there are definitely benefits to using jargon. For example, you can just fit more meaning in fewer words, right? I think that probably helps you process information.</p><p>Again, the other thing is there's so much science popularization, I know. I watch a lot of science YouTube videos and read science journalism, like Quanta and stuff. Papers not being accessible, I'm not sure it should be treated as a problem or something per se, aside from when it's-- I was going to say aside from when it's deliberately abstruse, but I'm not sure if there's even a way to determine if it's deliberately abstruse or not or if it's just culture or something.</p><p><strong>[00:02:23] Dan: </strong>I find your writing often incredibly accessible, and sometimes you take really complex topics. That's where the question came from.</p><p><strong>[00:02:30] Alexey: </strong>Well, because I write for the public, and scientists write for scientists. 
If I were writing for scientists, I would be writing very differently.</p><p><strong>[00:02:41] Dan: </strong>I find it interesting, though, some scientists, I had Steve Hsu on the show, and he blogs a lot, but he also writes a lot of scientific papers. His papers come through to me as super, super easy to read relative to other people in genomics and machine learning. I get really lost in a lot of those. It seems like there's a connection there because he blogs so much that it almost translates over into his papers. It's interesting.</p><p><strong>[00:03:07] Alexey: </strong>Yes. I wonder if it's something like, actually, "Why does Steve Hsu blog?" He blogs because he wants to express his ideas, and he wants to be known to the public. My guess would be not that blogging translates back into papers, but that there's something about Steve, the fact that he wants to do particular things, that leads him to prioritize papers or blog posts being very accessible, more accessible than for other scientists who are fine with papers only being understandable to experts in the field, right?</p><p><strong>[00:03:47] Dan: </strong>Yes, that's a really good point. How do you cultivate such good intellectual taste? Your links and best of Twitter are some of the best out there.</p><p><strong>[00:03:56] Alexey: </strong>I'm not sure I cultivate it. The framing of the question, I guess I'm tempted to say that I read a lot or something, and I consume a lot of content, and this helps. This does help, but is it the key to what you would consider good taste? I'm really not sure. I feel like, honestly, it's just something that maybe always has been present, and I'm not sure I have much control either way.</p><p>The thing I do is just share things that I find interesting or read things that I find interesting. Why do I find these things interesting? I guess if I try to unravel this, it's going to be something about learning. 
It's difficult for me to focus on things that are boring, and things that are boring to me are usually things that I'm reading but not learning much from. I think this being tuned to a significantly higher degree than is typical is a significant part of what's responsible for a wider surface area.</p><p>Then also, I guess here's the thing: if I read what I consider to be interesting, things that I learn from, then it's natural that other people are also going to learn from them. In contrast, if you're not very interested in learning per se, then, well, you're going to be reading things that you don't learn from, and then you're going to share things that people don't really learn from. People find things interesting when, by consuming them, they learn something interesting.</p><p>Curiosity is a big part of this, but how do you cultivate this? I guess it's just always been there or something. I think it's very easy for me to be bored. I've been thinking also about this. Returning back to your question about accessibility of writing, people often tell me that my writing is unusually clear or something. I think the reason is because, well, first, I spend a lot of time editing, but second, I edit until I myself find reading what I write interesting.</p><p>Because it's very easy for me to be bored, I have to really struggle and spend a lot of time making things that I write interesting for myself. As a result, they also end up being interesting to other people, and I end up being clear to other people. I think it's a general heuristic, and it's a way to think about-- I'm not sure I think about it consciously all that much, but a big factor in what I publish or write or share or tweet is definitely whether I would subscribe to my own blog.</p><p>If I saw the blog post that I'm going to publish, would I subscribe after reading the blog post? 
If the answer is no, I'm much less likely to publish the thing because it's probably not very interesting, right?</p><p><strong>[00:07:07] Dan: </strong>Yes, makes sense. What does Russia get right that the US gets wrong?</p><p><strong>[00:07:12] Alexey: </strong>Maybe the answer is something like Russia is a more cultural country. In a way, I think one thing that I've been pondering recently that's very relevant to this is "Why did communism only succeed in Russia?" I think the reason for this is that Russia has this uniquely idealistic culture in a way, where people do take ideas seriously. There's many communist parties, but I'm not sure communism could ever win in America because America is both more optimistic but also more realistic.</p><p>America is grounded in reality much more than Russia, and Russia is a very tragic country, and it's a very idealistic country, and people take ideas seriously there, seriously enough to convince the population of this large, poor, mostly agrarian country to support the communist regime. And not only the communist regime, because China also had communism. It's remarkable to me how, in China, as soon as Mao Zedong died, the very next chairman of the Chinese Communist Party basically abandoned everything.</p><p>They were like, "Okay, that guy is dead. Let's go and do something reasonable. Let's keep building capitalism. Let's get rich." Now they have an extremely hyper-capitalistic society. This basically started as soon as the first leader of the party died. In Russia, they actually kept building socialism. Well, they were living in socialism, and they kept building communism very seriously for generations.</p><p>Lenin died, and Stalin really did believe in communism, and all the high party functionaries really believed in communism. Khrushchev after Stalin really believed in communism, and I think even Brezhnev probably. When I talk to my dad, for example, he was born in '64. 
I think by the '70s and '80s, people stopped believing in this, but for decades, the Soviet Union was trying to build communism by the year 1980. This is totally insane and impossible, the whole idea.</p><p>Socialism, in the way the Soviet Union was building it, just did not work and could not work. The communist vision that they were building also probably just could not really work, yet, maybe not the entire country, but the people who were running the country really did believe in this. This is, to me, really fascinating, I guess. Is it right? Is it something that Russia gets right and America gets wrong? The Soviet Union was not something good that happened to the country. It was terrible in a myriad of ways, but I guess it's something that's really fascinating to me. Again, this idealism of the elites versus the realism of the elites, there is something very fascinating about this.</p><p><strong>[00:10:30] Dan: </strong>That's really fascinating.</p><p><strong>[00:10:32] Alexey: </strong>I'm also thinking about, here's something fun as well. Russia is famous for mathematics and for writers, right, Dostoevsky, Tolstoy, the Russian culture, especially 19th-century culture. I think Russian literature is on a whole other level. Again, I think it's all from the same root of this idealism of the elites where, Europeans, they create great culture, but there is no-- I think it's either realistic or it's nihilistic. It's not like this. I'm not sure that Western Europe or the US could have ever produced a Dostoevsky, for example.</p><p>Again, going back to math, I think, again, it's the same root. It's like math is the thing where you totally retreat from the real world, and you're just in the world of ideas and imagination. 
I think it all stems-- it's all about this idealism of the elites, and it's something that seems to me to be very uniquely Russian, I guess, where it's this northern country where people live in these hard conditions and have to think about how to prepare for the winter and how to survive and all the very difficult things.</p><p>At the same time, it's very authoritarian, and the power is very centralized. There are a lot of really smart people who don't really have much to do, and they produce really great art, and they think a lot about ideas, and they do great math. That seems to be something uniquely Russian.</p><p><strong>[00:12:06] Dan: </strong>Yes, that is fascinating. What's your favorite Dostoevsky novel?</p><p><strong>[00:12:11] Alexey: </strong><em>Brothers Karamazov</em>.</p><p><strong>[00:12:13] Dan: </strong><em>Brothers Karamazov</em>. Okay. I just read <em>Anna Karenina</em> by Tolstoy a couple of weeks ago. I finished it, and I just started <em>Demons</em>, but I haven't read any other Dostoevsky. This is the first one.</p><p><strong>[00:12:24] Alexey: </strong><em>Demons</em> is also great. I'm personally less partial to Tolstoy than to Dostoevsky. I feel like Tolstoy doesn't accept reality as it is or something. He's idealistic but in a way that's almost not fully honest with himself or something. If you read Tolstoy's <em>Confessions</em>, for example, he wrote it, I think, two years after finishing <em>Anna Karenina</em>. He basically denounced all of his previous writings.</p><p>In <em>Confessions</em>, he wrote something like, "Well, I wrote this <em>War and Peace</em>, and I wrote <em>Anna Karenina</em>, and now I'm the greatest writer in the world. I wrote the best novels. Now I really don't know what to do with my life. I realized that I was only writing all of this stuff in order for people to like me because I wanted to be a great writer or something, and it really doesn't matter in the end." 
I feel like he did have this realization, and it's true, but he did not magically change. He still kept being Tolstoy.</p><p>He was 45, I think, when this happened to him. I don't know. I feel like it's really painful, and it's terribly, terribly difficult to write something that's really, really honest. It seems that Dostoevsky succeeded at this, maybe because he was literally almost executed when he was 25, right? He was like, "Well, I have nothing else to lose. I might as well just be honest." I feel like Tolstoy, even though he was at war, never quite reached the stage where he was able to just stop worrying about other writers at St. Petersburg balls liking him. He just always kept looking for that.</p><p><strong>[00:14:16] Dan: </strong>Yes. No, that makes a ton of sense. That's a good analysis. Why do you think smart people are so easily seduced by utilitarianism?</p><p><strong>[00:14:25] Alexey: </strong>I don't think it's about smart people. I think it's a very particular kind of smart people. Again, it's like our entire discussion is about idealism. I think utilitarianism is this particular sort of idealism and in fact the kind that Dostoevsky was arguing very passionately against. The reason utilitarianism is seductive to smart, highly systematizing, largely depressed people is because it provides a very clear direction in life.</p><p>Especially if you're an atheist, then there is no purpose in life. There is no meaning in life. The universe is just what it is, and everything is random. There is no meaning. 
If you have this very mathematical mind, and you want everything to be clean and simple and beautiful and abstract, and you want there to be systems with a small number of moving parts, and you're in this situation where you're lost in life and there is no meaning and everything is random, then, of course, you're going to go to utilitarianism, because there is one utility function.</p><p>It's very mathematical, very beautiful, expected whatever, pleasure minus suffering. It doesn't matter that you can't calculate it. It doesn't matter that you don't know what the future holds. All of these things fade away in the face of the beauty of the single function to optimize. It's a very meaningful function because also if you suffer a lot and you're depressed, then you're like, "Oh, yes, actually becoming happy, and making other people happy as well, is a great thing to pursue in life."</p><p>I think that this combination of-- there is also something very scientific about utilitarianism because math is science. It's such a perfect replacement for whatever it is that people lose when they become atheists. That leads people to really cling to utilitarianism, I think.</p><p><strong>[00:16:43] Dan: </strong>You called out that it's not smart people necessarily; it's a certain type of smart people. You also have a post talking about the difference between high IQ and genius. What do you think separates those two?</p><p><strong>[00:16:54] Alexey: </strong>Well, high IQ is about the raw processing power, basically. It's easy to have very high raw processing power and at the same time to have the same perspective as everyone else. 
Genius is much more about having a different perspective, and it's much more about synthesizing different perspectives and looking at things in a way that people did not look at previously.</p><p>I think a lot about Michael Nielsen's analogy, where he distinguishes, I guess, between scientists who work in existing fields and scientists who create new fields. I think in order to create a new field, you have to ask questions that people did not ask before. This is much less, I think, about raw processing power and being able to solve particular problems and much more about posing problems, so figuring out problems that other people have not already thought about. I guess this is what genius is about, much more so.</p><p><strong>[00:17:59] Dan: </strong>You have one post that has puzzled me for a while, and I saw some people in the comments on Marginal Revolution puzzling over it a little bit, too, which is your advice for Tyler Cowen. One of them says "recursive self-improvement requires a closed system." What does that mean?</p><p><strong>[00:18:17] Alexey: </strong>Well, my memory of what Tyler might have said or might not have said, maybe it's all a dream that as soon as you're-- Basically, it's about machine learning, right? People think about superintelligence. People think about fast takeoff and us building AI that is able to recursively self-improve. What I take this point to mean is that this is basically not really going to happen, because AI is going to recursively self-improve but only as long as it doesn't touch the real world.</p><p>Then the amount of data, the amount of patterns in the world that it can learn about is very limited. Well, maybe it will recursively self-improve math or something, but as soon as you want to touch the real world, as soon as you want to predict the real world and discover its laws, you need data from the real world as well. 
This is an open system, and then you run into very, very different constraints, because collecting data from the real world is slow.</p><p>It's orders of magnitude slower than just being in your head or in an AI's head and just thinking. This recursive self-improvement basically stops working in the way that people imagine it to work. It basically works the way our civilization already works and, I guess, the way humanity already works, where humans are in a way recursively self-improving. We've been living in this recursively self-improving regime for, what, 10,000 years now. I guess things again are working, and it's not just blowing up with triple exponentials. It's very nice and slow single exponentials. [laughs]</p><p><strong>[00:20:15] Dan: </strong>Yes. I found this interesting. You had a 2019 post called <em>How Life Sciences Actually Work: Findings of a Year-Long Investigation</em>, and you come out of it basically arguing that academia has a lot of problems, but it's less broken than it seems from the outside, which is pretty contrarian in a lot of groups, to say that academia is doing a lot of things well. But then you come back in 2021 and you're like, "Well, actually no, I'm more pessimistic. This is why I started New Science, the nonprofit." Can you explain a little bit about what happened over those two years to make you change your mind?</p><p><strong>[00:20:48] Alexey: </strong>I think the biggest thing that happened is just recalibration, because I think especially if you hang out with tech people, then there is this picture of academia just not working, and people often write about the death of academia and science almost sliding back in time. I think this is not very different from what I thought when I first started to think about this. I was like, "Oh, everything is terrible. Everything is broken."</p><p>After a year of actually looking into this, in 2019, I was like, "Wait, actually, yes, lots of things are broken, but things are still working. 
There's lots of problems, but things are not as absolutely horrible as people think." This update from me expecting things to be really horrible to then being like, "Oh, wait, actually there are good parts" caused me, I guess, to maybe over-update, at least in terms of my mood, for me to be like, "Oh, wait, actually, things are not very bad."</p><p>Then after two more years, I was like, "Wait, things are actually really bad. The fact that there are good parts and the fact that it's possible to do things does not mean that things are not really broken." An analogy to SpaceX could be: we were still able to launch rockets in the year 2000, and NASA was still operational. Hypothetically, in fact, without SpaceX, they probably could have landed another person on the moon in a few decades. All of this does not contradict the fact that things were still terrible in all kinds of ways.</p><p>If you came in with the expectation that we can't even get a rocket into space, then I think learning that we actually can would make you pretty pessimistic about your plans for whatever alternative you are thinking about. At the end of the day, SpaceX still makes sense. I still think it makes a ton of sense to really basically rethink how academia should work and how we want to go about doing science. It does seem that the technology of science--</p><p>I feel like the primacy of truth above everything else, and the integrity that used to sustain science from the inside, is really, really getting eroded, where universities-- Well, I guess everyone knows. I'm best known for my debunking of this book by Matthew Walker, who is a professor at UC Berkeley, where this famous neuroscience professor wrote a pop science book. It is basically pseudoscience. He just makes things up endlessly.</p><p>He writes that lack of sleep will kill you in all kinds of ways, will double your odds of getting cancer. Sleep is the best thing in the world. Lack of sleep is the worst thing in the world. 
I looked into the science and looked at his citations, and the reality is totally different from what he writes. It's really not clear what the relationship between sleep and health and cancer and all these things really is. Then I published this, and lots of scientists read what I wrote, and they were like, "Yes, this makes sense."</p><p>Then it turns out that, at some point, he manipulated data, actually, to cut out part of the graph that was going against his argument, and then nothing happened to him. UC Berkeley actually apparently looked into this, and they decided that it's all totally fine, and he has been a good scientist, basically making data up, lying shamelessly to the public. I looked at his papers, and this is really funny because I actually sent a data request for one of the papers he published, in a journal where authors have to send the data upon request.</p><p>Then they never replied to me, and then I wrote to the editor. Then I think eventually, if I remember correctly, they sent the data two months later but then simultaneously published a correction to the paper that was like, "Oh, by the way, we discovered a bunch of mistakes, and they don't change any of our major conclusions." It was like, "Come on, the one paper that I requested the data from, this happened."</p><p>The fact that UC Berkeley does nothing, and the fact that there's many other scientists who were extremely credibly accused of not just lying to the public but of actual scientific fraud and universities largely just don't really seem to care, this really worries me. It seems that something really needs to be done about this. I am not sure if an entirely new academia needs to be built or if it's possible to reform academia or what the right way to go about doing this is exactly. Well.</p><p><strong>[00:25:51] Dan: </strong>What would you recommend that the general public do to fight back against this? 
Notably, you said you spent over 130 hours researching to write that Matthew Walker essay. The average person who's just shopping for a pop science book and wants to read about sleep isn't going to have that much time. Do you have any heuristics or recommended ways that the average person can get a gut check? Maybe another way to phrase this question is actually going back, for yourself, how did you first recognize that maybe there was something wrong here?</p><p><strong>[00:26:21] Alexey: </strong>I think that was the first part. For an average person, the best thing to do, if you're starting to read a pop science book, is to check one citation at random and just see whether what the abstract of the paper says corresponds to what the book you are reading says. If they do not match, it's probably not a good sign. If you did this check with <em>Why We Sleep</em>, then you would very quickly discover that something is not quite right.</p><p>In terms of how I realized that something is not right, well, I guess I did not do the same. I just started reading, and in the first paragraph, there was something that I thought sounded very suspicious. There was a claim, if I remember correctly, about lack of sleep doubling the odds of getting cancer. I was like, "First, this sounds really wild and just doesn't pass my vibe check, I guess." Then I tried to imagine, "Okay, how would you design the study that would result in such a discovery?"</p><p>I was like, "Well, there is no way in hell anyone could have run such a study." You would need a really giant RCT that makes some people sleep less, some people sleep more, and then tracks them over 10 or 20 years or something. 
Obviously, nobody has ever done anything like this, which means that at best, this claim in the very first paragraph of the book is based on some probably really low-quality correlational data where a million different factors influence each other, and then he found one correlation and made a claim that "Oh, this causes this" when it's just this giant network of interrelationships.</p><p>Then I looked into the data, and there is just nothing. There is some occasional sleep and cancer correlation. There is nothing really systematic. I guess this is how I realized that something is not quite right. Then in the entire first chapter, I just kept reading things where I was like, "Okay, there's no way he could have gotten this evidence. This is obviously wrong. Something is not right here." Then whenever I would look into something like this, things just didn't check out.</p><p><strong>[00:28:45] Dan: </strong>Do you know anything about how these-- Do these books go through checks before they get published? I guess in your view, how is it that books like these can get published and get so popular without a bunch of people noticing?</p><p><strong>[00:29:00] Alexey: </strong>I think people do notice. They just don't care. A few neuroscientists actually wrote to me, professors and grad students and postdocs, after I published my piece and they were like, "Oh, yes. We knew that his science is not very good. The book is an embarrassment to the field, but he is the single most prominent researcher in the field. He runs a large sleep center, and he is really well-published. He has a lot of power and control, and also he brings a lot of money to the sleep research field."</p><p>The people who could notice this are in a position where they're either dependent on Walker himself, or they are afraid of repercussions. I was, at the time, basically nobody. I was unemployed for almost a year by that point. 
I didn't really have much to do, and I didn't have much to lose. I was like, "Well, it seems like there's something really wrong in the world, and I might as well, as an unemployed 22-year-old, just do something about this." Other people, they're maybe a professor, and they can't afford to first spend a month full-time writing this and then deal with the fallout of maybe their department getting less funding next year, or them personally getting a negative review from reviewer two and no longer getting published very well, or all kinds of things like this.</p><p>My friends often tell me to be more friendly on Twitter and to not call out bullshit. I care about my friends, and I do try my best, and often they make me not call out BS that I see. Then again, if you're in academia, this influence is 10 times stronger, 100 times stronger. I guess I'm personally not very surprised that nothing happens. It seems that this is just how things are, I guess. People in positions to notice that something is not quite right usually are also in positions where they don't want to do it.</p><p><strong>[00:31:26] Dan: </strong>Yes, related question. If you didn't need to sleep yourself and you had a marginal seven hours, let's say, per day over everybody else in the world, where would you spend that time?</p><p><strong>[00:31:37] Alexey: </strong>I have no clue. I'm a big believer in doing rather than talking, and whatever I might say, whether I would spend more time building things or coding or reading or writing or learning, I have no clue. Maybe, yes, maybe I would spend seven hours a day on YouTube. I think these hypotheticals are not something that we should take as seriously as we often want to. 
We all have goals, resolutions, decisions, and then more often than not, gyms get much less full in February than they were in January, and with these extra seven hours without sleep, who knows what would happen.</p><p><strong>[00:32:33] Dan: </strong>Yes, I saw a tweet one time that was basically like, every aspiring novelist who thought they just didn't have the time learned during COVID that this is not true. [chuckles] It's like, oh, if I just had the time, and it's like, okay, here you go. Not so many people used it.</p><p><strong>[00:32:52] Alexey: </strong>I spent a lot of time playing Counter-Strike during COVID.</p><p>[laughter]</p><p><strong>[00:33:01] Dan: </strong>Actually, this is maybe my favorite post of yours, but I thought it was really interesting because it's something that I try to do more often but I think is really hard. You go back and you evaluate some of your previously held beliefs. You're pretty vocal publicly about, obviously, sleep not being as important as people say. Also on meditation, you thought that it was not nearly as useful as people had claimed.</p><p>Then you wrote a post later that came out and was like, well, actually, you weren't like a full sleep maxer, but you said, "Sleeping is more important than I previously believed." Then the same thing on meditation. I'm curious, how did this come about? I feel like for me, if I hold a strong belief and I say it publicly, it can become very hard to change my mind. How did you go about reevaluating this?</p><p><strong>[00:33:52] Alexey: </strong>I guess I just don't care. Rather, I think the answer is probably the same here as for why I'm working on improving how science works. Why did I write this piece about <em>Why We Sleep</em>? Why do I write these quote tweets on Twitter where I see BS? I feel like I care about things being true or not, like, a lot, just unhealthily much, where, right, <em>Why We Sleep</em> made me really angry. It made me really emotionally angry. 
I was like, wait, this is not right. It just evoked this emotional response in me.</p><p>When I realize that I hold a belief and it is not actually right, I can become angry about this. I was like, no, I want to be correct. Truth is important. I guess this just matters more than looking stupid in public. Also, this post, I think people enjoyed reading it. I think it helps other people as well.</p><p><strong>[00:35:18] Dan: </strong>How about specifically on meditation? Because that's something that I myself have never been able to get really into. I feel like it's something where if you think it's not useful and you never go and do it, you don't just magically get sucked into it one day, you have to actually wake up and believe that it might work and then go and practice it. How did you go about changing your mind on that one?</p><p><strong>[00:35:40] Alexey: </strong>Oh, just a couple of people who I trusted and who held similar beliefs before told me that meditation is good. I was like, okay, I'm going to try. Then I tried and I was like, wait, this actually seems useful, because I think in the very-- I spent an hour just sitting on the yoga mat thinking, and I was like, wait, shit, I haven't really thought just by myself in a very long time. I actually figured a bunch of things out and I resolved some problems.</p><p>Then what happened was I just decided to spend 100 days doing one hour of meditation a day, no matter what. Well, if I had just made this decision, I don't know how it would have gone, but I also told a friend about this very quickly. I was like, okay, this is great. I'm going to do 100 days of this. I put it on a Post-it note on my mirror in the bathroom. I was like, okay, here's 100 days. Here's like when I started, here's where it ends. 
Then that's how I got into this.</p><p>Actually, funnily enough, as soon as the 100 days ended, I didn't really retain the ability to-- I stopped doing this one hour a day religiously, even though it was still very useful and still very helpful. I think it was the beauty and simplicity of this goal of 100 days, and this being my biggest personal goal, and me telling people that I would do this, that all contributed. Without this, I probably wouldn't have been able to sustain it.</p><p><strong>[00:37:23] Dan: </strong>Yes. Enough people saying like, no, trust me, you got to try it.</p><p><strong>[00:37:28] Alexey: </strong>Enough people who I really trusted and who I saw-- What was important here was that I knew that these people believed the same things about meditation that I believed in the past. I was like, okay, they were where I am right now in the past. I think they're really smart and they changed their mind. There's probably a good chance that I'm just not seeing something that they're seeing and I might as well try.</p><p><strong>[00:37:53] Dan: </strong>Do you think that scientists or researchers benefit from a belief in God?</p><p><strong>[00:37:59] Alexey: </strong>What do you mean by benefit here?</p><p><strong>[00:38:01] Dan: </strong>Do you think it makes them a better scientist or researcher?</p><p><strong>[00:38:04] Alexey: </strong>Yes. My impression is that it seems that there is this very deep need in people to believe in something beyond themselves. If they lose it, it's difficult for them to stay true to themselves and stay true to the world and to retain this idealism because everything becomes random. Science being the pursuit of truth and the laws of nature and reality. 
I think if you believe that ultimately everything is random and everything is without purpose, then I think you're not going to be doing science as well as you would if you thought that actually there is meaning and there is something beyond just the papers that we publish.</p><p>That beyond these arbitrary rules or arbitrary phenomena that we discover, there is something more. I don't know if it's true, but it's my suspicion that there is just something about this. It's very difficult for people to live without the belief in something beyond themselves. It really breaks them. Well, a lot of them end up with utilitarianism or communism or all these kinds of replacements, or you just start making up data and try to get as famous as you can.</p><p>When people do take God really seriously and they do take these absolute moral rules seriously, they do not break them or they do their best not to break them. When they don't have this higher authority, when everything is random and arbitrary, then--</p><p><strong>[00:40:03] Dan: </strong>You wrote a post in 2019 called <em>Why You Should Start a Blog</em>, and you're convincing people, "Hey, I started a blog, here's all the benefits that I got from it." Now blogs are much more saturated. You have Substack, there's many, many more than there were in 2019. It seems like just the number of blogs on an absolute basis is much, much greater. Perhaps the likelihood of being found or getting read goes down a little bit. My question to you is, do you think that in the world of today, you would give that advice as strongly as you would've back in 2019?</p><p><strong>[00:40:41] Alexey: </strong>I'm not sure I know many great blogs, honestly. Yes, there are many blogs, but there have always been many blogs. There used to be LiveJournal, and everyone had a LiveJournal, or people used to write on their Facebook walls a lot. I think that before that there was the platform called Blogger, if I remember correctly. 
There have always been lots of blogs. There's always been a few really great ones. I don't think this really changed. I think the returns to creating a really great blog are as high, if not higher, than they used to be, to be honest.</p><p><strong>[00:41:21] Dan: </strong>Got it. Question on video games. Do you think that they're net good or net bad for society?</p><p><strong>[00:41:27] Alexey: </strong>I have no clue.</p><p><strong>[00:41:28] Dan: </strong>No clue. Do you think you can learn from video games for some people?</p><p><strong>[00:41:32] Alexey: </strong>Definitely can be net bad for some people, can definitely be net good for other people. For me, I don't know what the answer is to this question, honestly. I guess you're alluding to the fact that I spent many years playing video games while being really depressed and suicidal. Is it because of video games or was it because of the circumstances of my life, and video games were just a thing that was available? Maybe without video games, I would've been doing hard drugs or something instead. I'm not sure.</p><p>Maybe I would've become an alcoholic if I didn't have video games. Thinking in counterfactuals again, this is exactly why utilitarianism I think is much more difficult than people think. It's very easy for me to just say, "Oh, no, definitely video games are bad. Everyone wastes so much time. I spent years addicted to them. I spent COVID playing Counter-Strike instead of doing anything useful," but then who knows?</p><p>Maybe the only reason I'm not dead from drinking way too much alcohol or the only reason I'm not doing heroin right now is because I had video games that helped me to cope with what was surrounding me at the time. Honestly, I have very little idea whether video games are net good or net bad.</p><p><strong>[00:42:49] Dan: </strong>That's interesting. Another post that actually I like to reference a lot is the one that you have called <em>The Importance of Increasing Morale</em>. 
What's the most common thing you do to increase your morale? You give a list on the blog of like 100 things basically that you could do, but I'm curious if there's any one, when you wake up every day, that you go to--</p><p><strong>[00:43:09] Alexey: </strong>Going outside.</p><p><strong>[00:43:10] Dan: </strong>Going outside.</p><p><strong>[00:43:11] Alexey: </strong>Yes. I think going outside is the easiest thing. The thing that pretty much never fails, I think. Going outside. I don't want to-- otherwise, there's a few more, but just going outside always works, and it's very easy to do.</p><p><strong>[00:43:30] Dan: </strong>I've lived in situations where I have a gym in my house or in an apartment building, or I can go to a gym that's a 5 to 10-minute walk away. On the surface, it seems nicer to have the gym in your building. I typically go in the mornings, but I have found that I actually like the walk in the morning a lot better. Just getting outside first thing in the morning is incredibly helpful. I resonate with that quite a bit.</p><p><strong>[00:43:57] Alexey: </strong>I think people really underappreciate this one, actually: the impact of DoorDash or any kind of delivery, or any remote work, or the internet itself, or even us doing this interview. This interview would've been even more fun if it was in person. We're not doing it in person because we can do it over video, and I'm not going outside, you're not going outside, and we're both more depressed than we could've been otherwise.</p><p>Everyone else who could've gone outside and attended an event with this interview is not going outside; they will watch this in their home instead, and they will probably order delivery to their home instead of going outside and going to a grocery store. It's amazing to me how many of our technological innovations are literally just making us less likely to do the one thing that's most likely to make us less depressed. 
Even vaping, [crosstalk]</p><p><strong>[00:45:05] Dan: </strong>Yes, you have to go outside to smoke a cigarette.</p><p>[laughter]</p><p>That's actually a really good point. It's just maximum laziness. You could just do it on your couch and the house doesn't smell. What have you learned from David Goggins?</p><p><strong>[00:45:25] Alexey: </strong>Honestly, the biggest thing I learned is that if you are doing something really physical and it doesn't require much brain power, then it's very easy to use this extra brain power to make yourself do whatever physical labor you're doing better. If you're running a really long distance, then you don't have to think about anything really. You can talk to yourself and you can motivate yourself and you can imagine all kinds of things. This doesn't really apply to mental work, where you're already using 100% of your brain power and you don't have this 80% of your brain spare that you can use to motivate yourself. Honestly, that may be the biggest one.</p><p>I'm a huge fan of David Goggins. I read his book way too many times. I think I read it three times over just-- I first read it and then I was like, "Wait, this is too good." I read it again immediately, and immediately again; this had never happened to me. The storytelling is incredible. The tricks and the strategies that he offers are incredible. His personal rise and fall and rise and fall, these are all incredible. In retrospect, I think they fit his lifestyle and the things that he does much better than the life of most of us, probably.</p><p>The lessons unfortunately generalize quite a bit less. One thing that I think is relevant, I'm not sure who I heard this from, maybe it was just a tweet, but it was something along the lines of, "Take advice from people who you want to be, or take advice from people who are in a position in life where you want to get to," or something. 
I think at some point I realized, wait, David Goggins is an amazing person, but I am not personally really interested in breaking world records for running or doing pull-ups.</p><p>I don't think I'm going to be a motivational speaker or something like this. Whatever the lessons and strategies that he used to get where he's at, however amazing all of this is, it's probably less useful to me than I would've hoped.</p><p><strong>[00:48:13] Dan: </strong>I guess if you're just really struggling over VS Code, "stop being a bitch" is not as good advice as it is if you're running an ultra-marathon, where it's a little more useful.</p><p><strong>[00:48:23] Alexey: </strong>Yes, or singing a song when you run. It's such an amazing strategy, where I think they were carrying the boats and everyone was almost dying from tiredness and lack of sleep and just total muscle soreness. Then David was like, "Wait, why don't we sing a song from a war movie?" I don't remember which, but it's really motivational, really uplifting. They started singing and they were like, "Okay, we're going to make it." They made it.</p><p>It's so amazing, except you can't really do it when you're struggling debugging in VS Code, because it actually takes brain power that you are already using. You're going to be worse. It's very, very, very sad. Very unfortunate. I really, really admire David Goggins. Well, if I decide to do a pull-up world record at some point, I'm going to go back and read all his stuff with double the passion.</p><p><strong>[00:49:24] Dan: </strong>That brings up a good point too, because you do have a post on advice. The post basically says that giving advice is really difficult because all people are different, and you just alluded to this, where it's like David Goggins is maybe achieving a different goal than you might be at a certain point in time. 
I'm curious, what single piece of advice has been most influential on you, and who did it come from?</p><p><strong>[00:49:48] Alexey: </strong>I'm not sure I can identify one. People like Tyler Cowen, Sam Altman, Patrick Collison, Nat Friedman have been huge influences on me. Whether it's because of following advice, or thinking about advice and deciding not to follow it and instead do something totally different, I am not sure which one is the bigger instance, honestly.</p><p><strong>[00:50:18] Dan: </strong>Yes. Totally fair.</p><p><strong>[00:50:19] Alexey: </strong>I think the thing about advice is, again, at the end of the day, giving advice is not the same as living advice. Giving advice without living it, in a way, is almost like-- In economics, there's this concept of revealed preference, where people say things, and then people do things, and often the things they say and do are very different. Giving advice is this type of saying things and not doing things, and then the way you do things, how you do them, what you do, matters much more as advice than just giving advice.</p><p><strong>[00:51:05] Dan: </strong>I wanted to ask you about your post where you have your favorite media, so you've got movies, TV shows, books, podcasts. I noted that you list <em>True Detective</em> season 2 on there. I remember when <em>True Detective</em> season 1 came out, the critical reception was basically that this was the greatest TV show of all time. Season 1 was revolutionary, and season 2, it bombed. People did not generally give good reviews. I thought it was a little bit contrarian that you liked it. I'm curious, what is it about season 2 of <em>True Detective</em> that put it up on your list of favorites?</p><p><strong>[00:51:43] Alexey: </strong>I feel like, again, it's almost like season 1 is Tolstoy, season 2 is the TFQ for me.</p><p><strong>[00:51:49] Dan: </strong>Okay. 
This is good.</p><p><strong>[00:51:50] Alexey: </strong>Season 2 is honest in a way that season 1 is not. Season 1 is almost like-- I feel like it's something about maybe the creator of the show needing to prove himself, and he exaggerated everything. He made this main character, made him a connoisseur, this nihilist, to this just absurd caricature, I think. In general, it's almost like proving to people that you can create this amazing TV show, I feel, with these caricatures of thoughts and ideas and characters. Season 2, for me, just felt much more honest than season 1.</p><p><strong>[00:52:41] Dan: </strong>Got it. What was honest about it? Have you seen it recently enough to recall specifics?</p><p><strong>[00:52:47] Alexey: </strong>Yes. It's more difficult for me to talk about season 1 because I only watched it once. Then it was like, well, this wasn't all that good, and then season 2 was just incredible. Let me try to think. I guess it's something about complexity. Again, I feel like we're talking about many different things, but in a way we're still talking about the same thing, this simplicity and beauty versus complexity and ugliness, in a way.</p><p>The real world, we really want it to be beautiful and simple, and unfortunately, it's very complicated and ugly and difficult in all kinds of ways. Season 2, I think, just shows this much more: things are complicated. Everything up until the end of the show, every character, there is something good about them in a way, something bad about them. They have reasons for doing things.</p><p>Sometimes they do things without reasons. Sometimes it's because they want to be loved, because they want to be rich, because they want to be powerful, because they are trying to escape something. And this complexity, again, I think there is this beauty in the complexity and the ugliness and the lack of clear-cut goodness and badness of the characters. 
Yes, it just seemed like season 2 was very unique to me as a TV show for being able to show all of this.</p><p>Again, to be honest about the difficulties of everything and the complexities of everything. If I remember correctly, up until the last episode maybe, or the last two episodes, we don't even know if-- geez, I don't remember any of the names of the characters, but there is this mafioso leader, presumably, who gave the corrupt cop the wrong person as the person who attacked the cop's wife. I think until then you don't know anything. You don't really know. Is this true, was he himself confused, was it deliberate, why did he do it?</p><p>Is he a bad person, is he a good person, we don't know. I think that's how the world works. You just don't really know, and it's difficult and complicated. Maybe that's precisely why critics don't like it. As a critic, you want to be able to pronounce a clear judgment or something, because if you don't, people won't read you and they won't enjoy your reviews. Then you present them with a really difficult work where you can't really form a clean judgment and say things that are very simple. Maybe it's not unexpected that critics don't like this kind of art very much.</p><p><strong>[00:56:06] Dan: </strong>One thing I wanted to ask you about is a claim in one of your blog posts that says that there's alpha in low status. Why do you think that is?</p><p><strong>[00:56:15] Alexey: </strong>I think it's something about status being a reward. High-status professions, like banking, consulting, software engineering. Well, I guess in some circles, it's high status. In other circles, it's low status. High status, in a way, is a reward for not deviating from the past and doing what is expected, and just continuing to do whatever you were doing and starting to make a lot of money and starting to be respected, rising up the ranks. 
As you rise up the ranks, you gain status.</p><p>I think what usually happens if you do this too religiously is you just keep doing the same thing. Now, the things that we start doing when we're 18 or 20, I didn't have any clue what I was doing when I was 18 or 20, or honestly, I don't know if I have any clue what I'm doing today still, but I definitely have more clue than I had when I was 18. If you keep rising in status continuously as you progress through life, I think there is a very good chance that you're basically rising up in status in the wrong direction.</p><p>It's very difficult to stop doing this. It's very difficult to just be unemployed and think about life for a year. It's very depressing and it's terrifying in a way, and I think very few people do this, but it's kind of a way to figure out the right direction, I think, to go in. It's a very low-status thing to do. All of your friends will probably think that you're a loser now. You're unemployed. You spent three months playing video games and not doing anything, and you're depressed. Then you're running out of your savings, and then it's now difficult to get a job because you have a year gap in your resume, and it's just really difficult to do.</p><p><strong>[00:58:31] Dan: </strong>If someone told you they were going on sabbatical, how would you recommend-- They're going to do a year sabbatical, what would be your recommendation for that period of time?</p><p><strong>[00:58:39] Alexey: </strong>I would recommend solo travel. Not sure what the right amount is, but I think the reason for this is, if you want to get unstuck from your existing patterns and expectations and all that stuff, you want to get out of the physical space where you are, and ideally, you want to spend time by yourself and think about things and figure things out for yourself. Solo travel is the thing that combines everything. You don't have to do it, I think, but it's like going outside. 
Just go outside, but somewhere very far.</p><p>Probably don't spend all of your time by yourself, because if you spend a year being by yourself and don't talk to people, you might kill yourself, but it's coming along.</p><p><strong>[00:59:38] Dan: </strong>What's next for the Guzey blog? Do you plan to continue posting? Do you have any intentions to stop?</p><p><strong>[00:59:43] Alexey: </strong>We'll see. Let's make this a mystery.</p><p><strong>[00:59:48] Dan: </strong>Well, Alexey, that's a great one to end on. Thank you so much for your time today. It was great talking with you.</p><p><strong>[00:59:53] Alexey: </strong>Thanks for taking the time as well.</p>]]></content:encoded></item><item><title><![CDATA[Noah Smith]]></title><description><![CDATA[Keynes, Piketty, Africa, Japan, Javier Milei, anime, and more]]></description><link>https://www.danschulz.co/p/noah-smith</link><guid isPermaLink="false">https://www.danschulz.co/p/noah-smith</guid><dc:creator><![CDATA[Dan]]></dc:creator><pubDate>Tue, 30 Jan 2024 14:24:23 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/141187946/070f6e93fbfb68efa4f9321e63208029.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p><a href="https://www.noahpinion.blog/">Noah Smith</a> is one of the most prolific and thought-provoking econ bloggers writing today. We covered a lot of ground in this episode, including Noah&#8217;s intellectual influences, how much room macroeconomics has for innovation, Keynes&#8217; idea of the 15 hour work week, Piketty&#8217;s theory of inequality, Africa&#8217;s economic development, why you should watch anime, and much more. 
Timestamps and links are below.</p><div id="youtube2-7wi1ZWbZBOc" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;7wi1ZWbZBOc&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/7wi1ZWbZBOc?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><iframe class="spotify-wrap podcast" data-attrs="{&quot;image&quot;:&quot;https://i.scdn.co/image/ab6765630000ba8a7fb76416a5b808186823211c&quot;,&quot;title&quot;:&quot;Noah Smith&quot;,&quot;subtitle&quot;:&quot;Dan Schulz&quot;,&quot;description&quot;:&quot;Episode&quot;,&quot;url&quot;:&quot;https://open.spotify.com/episode/3gySmZWDXINAwJuraZM2Bg&quot;,&quot;belowTheFold&quot;:false,&quot;noScroll&quot;:false}" src="https://open.spotify.com/embed/episode/3gySmZWDXINAwJuraZM2Bg" frameborder="0" gesture="media" allowfullscreen="true" allow="encrypted-media" data-component-name="Spotify2ToDOM"></iframe><div class="apple-podcast-container" data-component-name="ApplePodcastToDom"><iframe class="apple-podcast " data-attrs="{&quot;url&quot;:&quot;https://embed.podcasts.apple.com/us/podcast/undertone/id1693303954?i=1000643514905&quot;,&quot;isEpisode&quot;:true,&quot;imageUrl&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/podcast-episode_1000643514905.jpg&quot;,&quot;title&quot;:&quot;Noah Smith&quot;,&quot;podcastTitle&quot;:&quot;Undertone&quot;,&quot;podcastByline&quot;:&quot;&quot;,&quot;duration&quot;:4698000,&quot;numEpisodes&quot;:&quot;&quot;,&quot;targetUrl&quot;:&quot;https://podcasts.apple.com/us/podcast/noah-smith/id1693303954?i=1000643514905&amp;uo=4&quot;,&quot;releaseDate&quot;:&quot;2024-01-30T12:00:00Z&quot;}" 
src="https://embed.podcasts.apple.com/us/podcast/undertone/id1693303954?i=1000643514905" frameborder="0" allow="autoplay *; encrypted-media *;" allowfullscreen="true"></iframe></div><h3><strong>Timestamps</strong></h3><p>(0:00:00) Intro</p><p>(0:00:30) Noah&#8217;s intellectual influences</p><p>(0:10:20) Javier Milei</p><p>(0:11:43) Which macroeconomist would Noah give the highest grade?</p><p>(0:13:47) Defining mainstream macro in 2024</p><p>(0:18:22) Innovation in macroeconomics</p><p>(0:21:48) NGDP targeting</p><p>(0:24:35) Keynes and the 15 hour work week</p><p>(0:30:25) Is inequality a problem?</p><p>(0:34:16) Thomas Piketty and r&gt;g</p><p>(0:39:37) Africa&#8217;s economic development</p><p>(0:44:47) What has Noah changed his mind about recently?</p><p>(0:47:59) Noah&#8217;s prolific output</p><p>(0:52:25) Noah&#8217;s goals for the blog</p><p>(0:53:41) Anime</p><p>(0:55:46) Japanese policy in the US</p><p>(0:57:56) Inheritance tax</p><p>(1:04:38) Japanese zoning policy</p><p>(1:06:12) When is technology dangerous?</p><h3><strong>Links</strong></h3><ul><li><p><a href="https://twitter.com/Noahpinion">Follow Noah on X</a></p></li><li><p><a href="https://www.noahpinion.blog/">Noah&#8217;s Substack</a></p></li><li><p><a href="https://podcasts.apple.com/no/podcast/econ-102-with-noah-smith-and-erik-torenberg/id1696419056">Noah&#8217;s podcast &#8220;Econ 102&#8221;</a></p></li><li><p><a href="https://podcasts.apple.com/us/podcast/hexapodia-is-the-key-insight-by-noah-smith-brad-delong/id1552990332">Noah&#8217;s podcast with Brad DeLong</a></p></li><li><p>Noah&#8217;s post, &#8220;<a 
href="https://www.noahpinion.blog/p/how-are-milton-friedmans-ideas-holding">How are Milton Friedman's ideas holding up?</a>&#8221;</p></li><li><p>Noah&#8217;s post, &#8220;<a href="https://www.noahpinion.blog/p/all-futurism-is-afrofuturism">All futurism is Afrofuturism</a>&#8221;</p></li><li><p>Noah&#8217;s post, &#8220;<a href="https://www.noahpinion.blog/p/heterodox-vs-mainstream-macroeconomics">Heterodox vs. mainstream macroeconomics</a>&#8221;</p></li><li><p>Noah&#8217;s post, &#8220;<a href="https://www.noahpinion.blog/p/techno-optimism-for-2024">Techno-optimism for 2024</a>&#8221;</p></li><li><p><a href="https://twitter.com/dnschlz">&#8288;Follow Dan on X&#8288;</a></p></li><li><p>Subscribe to Undertone on&nbsp;<a href="https://www.youtube.com/channel/UCtRjQ8EA3Via7KXzv9F173Q">&#8288;&#8288;YouTube&#8288;&#8288;</a>,&nbsp;<a href="https://podcasts.apple.com/us/podcast/the-world-of-yesterday/id1693303954">&#8288;&#8288;Apple&#8288;&#8288;</a>,&nbsp;<a href="https://open.spotify.com/show/59YkrYwjAgiKAVMNGWPaLE">&#8288;&#8288;Spotify&#8288;&#8288;</a>, or&nbsp;<a href="https://www.danschulz.co/">&#8288;&#8288;Substack&#8288;&#8288;</a></p></li><li><p>Watch or listen to previous episodes of Undertone with <a href="https://www.danschulz.co/p/tyler-cowen">Tyler Cowen</a>, <a href="https://www.danschulz.co/p/vitalik-buterin">Vitalik Buterin</a>, <a href="https://www.danschulz.co/p/scott-sumner">Scott Sumner</a>, <a href="https://www.danschulz.co/p/samo-burja">Samo Burja</a>, <a href="https://www.danschulz.co/p/3-steve-hsu">Steve Hsu</a>, and <a href="https://www.danschulz.co/">more</a>.</p></li><li><p>I love hearing from listeners. 
Email me anytime at dan@danschulz.co</p></li><li><p>Share anonymous feedback on the podcast: <a href="https://forms.gle/w7LXMCXhdJ5Q2eB9A">&#8288;https://forms.gle/w7LXMCXhdJ5Q2eB9A</a></p></li></ul>]]></content:encoded></item><item><title><![CDATA[Tyler Cowen]]></title><description><![CDATA[Economics, philosophy, religion, literature, and much more]]></description><link>https://www.danschulz.co/p/tyler-cowen</link><guid isPermaLink="false">https://www.danschulz.co/p/tyler-cowen</guid><dc:creator><![CDATA[Dan]]></dc:creator><pubDate>Mon, 15 Jan 2024 23:55:06 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/140719816/cfde34316119871caa242db5fb9f5bbc.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<div id="youtube2-p-hx8Z8clI4" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;p-hx8Z8clI4&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/p-hx8Z8clI4?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><iframe class="spotify-wrap podcast" data-attrs="{&quot;image&quot;:&quot;https://i.scdn.co/image/ab6765630000ba8a006fe1998883d3d759aa5fc5&quot;,&quot;title&quot;:&quot;Tyler Cowen&quot;,&quot;subtitle&quot;:&quot;Dan Schulz&quot;,&quot;description&quot;:&quot;Episode&quot;,&quot;url&quot;:&quot;https://open.spotify.com/episode/6D2fHnuj7WphBpunxfp6mB&quot;,&quot;belowTheFold&quot;:false,&quot;noScroll&quot;:false}" src="https://open.spotify.com/embed/episode/6D2fHnuj7WphBpunxfp6mB" frameborder="0" gesture="media" allowfullscreen="true" allow="encrypted-media" data-component-name="Spotify2ToDOM"></iframe><div class="apple-podcast-container" data-component-name="ApplePodcastToDom"><iframe class="apple-podcast " 
data-attrs="{&quot;url&quot;:&quot;https://embed.podcasts.apple.com/us/podcast/undertone/id1693303954?i=1000641767754&quot;,&quot;isEpisode&quot;:true,&quot;imageUrl&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/podcast-episode_1000641767754.jpg&quot;,&quot;title&quot;:&quot;Tyler Cowen&quot;,&quot;podcastTitle&quot;:&quot;Undertone&quot;,&quot;podcastByline&quot;:&quot;&quot;,&quot;duration&quot;:4538000,&quot;numEpisodes&quot;:&quot;&quot;,&quot;targetUrl&quot;:&quot;https://podcasts.apple.com/us/podcast/tyler-cowen/id1693303954?i=1000641767754&amp;uo=4&quot;,&quot;releaseDate&quot;:&quot;2024-01-15T22:20:04Z&quot;}" src="https://embed.podcasts.apple.com/us/podcast/undertone/id1693303954?i=1000641767754" frameborder="0" allow="autoplay *; encrypted-media *;" allowfullscreen="true"></iframe></div><h3>Timestamps</h3><p>(0:00:00) Intro</p><p>(0:00:28) Identifying talent on Tyler&#8217;s podcast</p><p>(0:03:48) Why are follow up questions overrated?</p><p>(0:04:21) Tyler&#8217;s preferred guest career stage</p><p>(0:05:16) Optimal frequency for recording podcasts</p><p>(0:05:50) Tyler&#8217;s podcast prep</p><p>(0:07:39) Importance of in-person episodes</p><p>(0:09:56) What would Tyler ask Paul McCartney</p><p>(0:10:16) When to read literature in translation</p><p>(0:13:31) Tyler on Dostoyevsky and Nietzsche</p><p>(0:15:39) Elena Ferrante and the importance of pseudonyms</p><p>(0:17:17) Misreading literature</p><p>(0:17:56) Will literature get better in the next 10 years?</p><p>(0:19:21) Watching complex film</p><p>(0:22:14) Enjoying art</p><p>(0:23:08) Jonathan Swift and Peter Thiel</p><p>(0:24:45) Crude comedy</p><p>(0:25:28) Can we trust elites in the internet age?</p><p>(0:26:44) Generational theories of politics</p><p>(0:27:57) Religious thinkers and Tyler&#8217;s implicit theology</p><p>(0:29:33) What did Tyler get from Plato at a young age?</p><p>(0:30:51) Why are Shakespeare, Proust, and Melville in a league of their 
own?</p><p>(0:31:32) Fernando Pessoa</p><p>(0:31:59) Implications of demand sloping down</p><p>(0:33:22) Innovation in governance structures</p><p>(0:36:14) Should business leaders study the greats?</p><p>(0:38:36) GDP growth from today&#8217;s AI models</p><p>(0:40:35) LessWrong and worries about AI</p><p>(0:41:46) Feminized societies in times of chaos</p><p>(0:43:17) Fernand Braudel</p><p>(0:43:52) Best argument against NGDP targeting</p><p>(0:47:06) AI business models</p><p>(0:48:19) What is a bubble?</p><p>(0:49:33) Where are there too few Emergent Ventures applications?</p><p>(0:50:09) Unsolved problems in economics</p><p>(0:51:27) Immigration policy</p><p>(0:52:56) If Tyler wasn&#8217;t an economist</p><p>(0:53:50) Would Tyler go to Burning Man?</p><p>(0:54:33) The MR Universe</p><p>(0:55:32) Tyler&#8217;s equanimity</p><p>(0:57:15) Tyrone</p><p>(0:59:27) Joyce Carol Oates and Susan Sontag</p><p>(1:04:45) Wasting time</p><p>(1:04:54) Bach&#8217;s productivity</p><p>(1:05:29) Competitors to MR</p><p>(1:07:51) New Jersey and The Sopranos</p><p>(1:11:56) Rapid fire - what Tyler learned from different people</p><h3>Links</h3><p>- <a href="https://twitter.com/tylercowen">Follow Tyler on X</a></p><p>- <a href="https://marginalrevolution.com/">Tyler's blog</a></p><p>- <a href="https://conversationswithtyler.com/">Tyler's podcast</a></p><p>- <a href="https://www.newyorker.com/culture/cultural-comment/the-unmasking-of-elena-ferrante">&#8288;Unmasking Elena Ferrante&#8288;</a></p><p>- <a href="https://www.newyorker.com/magazine/2023/11/27/joyce-carol-oates-profile">New Yorker on Joyce Carol Oates</a></p><p>- <a href="https://twitter.com/dnschlz">Follow Dan on X</a></p><p>- Subscribe to Undertone on&nbsp;<a href="https://www.youtube.com/channel/UCtRjQ8EA3Via7KXzv9F173Q">&#8288;YouTube&#8288;</a>,&nbsp;<a href="https://podcasts.apple.com/us/podcast/the-world-of-yesterday/id1693303954">&#8288;Apple&#8288;</a>,&nbsp;<a 
href="https://open.spotify.com/show/59YkrYwjAgiKAVMNGWPaLE">&#8288;Spotify&#8288;</a>, or&nbsp;<a href="https://www.danschulz.co/">&#8288;Substack&#8288;</a></p><p>- Share anonymous feedback on the podcast: <a href="https://forms.gle/w7LXMCXhdJ5Q2eB9A">https://forms.gle/w7LXMCXhdJ5Q2eB9A</a></p><h3>Transcript</h3><p><strong>[00:00:12] Dan: </strong>Okay, my guest today is Tyler Cowen. He's a professor of economics at George Mason University, blogger at marginalrevolution.com, host of the podcast <em>Conversations with Tyler</em>, responder to email, and one of my intellectual heroes. Tyler, welcome.</p><p><strong>[00:00:26] Tyler Cowen: </strong>Dan, thank you very much.</p><p><strong>[00:00:28] Dan: </strong>Okay, so let's talk a little bit about your podcast. You've said before that the Straussian reading of <em>Conversations with Tyler</em> is that the entire conversation is really just a talent evaluation. How confidently do you feel you can make a talent assessment after you have an episode with just one guest?</p><p><strong>[00:00:42] Tyler: </strong>Well, virtually everyone who's on the podcast is already highly successful. In that sense, it's easy. You've decided just to pick the winners. Now, there's another question at stake, which is, how good will they be on a podcast? Someone can be mega successful, but completely boring on a podcast; that's a different kind of talent assessment. I would just ask the listeners to make their own judgment. I don't really have other reasons for having people on episodes, other than that I think they will be good. It's not like, "Oh, I had to have my best friend on or my cousin on." 
If there's a bad episode, I screwed up evaluating talent is the simple way to put it.</p><p><strong>[00:01:24] Dan: </strong>I guess another way to phrase this is when you are evaluating talent, just for a hiring decision more explicitly, how different do those feel from your conversations on the podcast?</p><p><strong>[00:01:33] Tyler: </strong>Oh, they're totally different. It's hard to have a very high hit rate on evaluating talent. These are typically markets where people are doing startups or they want to be important public intellectuals. They're a bit akin to power law or winner take all markets. If their success rate is, say, 2%, your ability to pick the winners, it just can't be that high. If you can build a portfolio, where say you're supporting 100 people, and 15 of them do very well, that's a very high hit rate. My guess would be, if you're a very good interviewer, and the people are operating in power law markets, you're doing quite well if you can manage 10 to 15%.</p><p><strong>[00:02:20] Dan: </strong>Got it. I'm curious, when you have a guest in a return episode, how much marginal insight do you feel like you gain from meeting them a second time, other than maybe like you talked about their new book or another area of their work, but just like getting to know the person, what's the marginal improvement when you have a second episode?</p><p><strong>[00:02:36] Tyler: </strong>It's quite slim. I don't know if you heard my episode with Paul Graham, but he said when he does interviews, typically he's making up his mind within seven or eight minutes, and often within two minutes. Now you can learn factual things about the person over many hours, years, decades, but your fundamental sense of them, it takes a very long time for it to change after, certainly after 20 minutes. As Paul noted, it can be less than 10 minutes.</p><p><strong>[00:03:04] Dan: </strong>How do you choose what order to ask your questions in? 
You don't tend to start with softballs.</p><p><strong>[00:03:09] Tyler: </strong>Depends if the person knows me or not or knows the podcast. Sometimes you want to signal that it's super serious. Other times, you just want to signal that it's friendly. When I had Vishy Anand on, he doesn't know anything about my worlds. What I did was I set up two chess boards in the room with important positions from earlier in his career, one of them was 30 years ago, and he saw the positions. Of course, he recognized them immediately. That put him at ease, and he thought, "Well, this will be at a high level, and this is someone who appreciates me." It depends on the guest what you signal at the very beginning.</p><p><strong>[00:03:48] Dan: </strong>Yes. Why are follow-up questions overrated?</p><p><strong>[00:03:52] Tyler: </strong>Often people just repeat the same thing they've said. Two-thirds of it is going through what they had said a moment ago, so why do too much of that? Sometimes true clarification is needed, or a person has said something you think is wrong, and you want to see if they can defend it, but most of the time, move on to something else. Do you have a follow-up to that?</p><p><strong>[00:04:15] Dan: </strong>[chuckles] I have another question. That's a response.</p><p><strong>[00:04:18] Tyler: </strong>I could repeat my same answer, then we could show it's true.</p><p><strong>[00:04:21] Dan: </strong>Yes, exactly, okay. What do you think the best career stage for someone to be at is to have them on as a guest? You can imagine you have people that are very early in their career, maybe like a Vitalik, you have Paul Graham, who's towards the end, you have other people who are operating in the middle of their career? 
Do you have a stage or any themes that you prefer to have somebody at, as far as how far along they are in their career?</p><p><strong>[00:04:44] Tyler: </strong>I'm not sure it matters, and I'm not sure if Vitalik is well thought of as early in his career. I think of him as mid-career, even though he's super young, because he did so much so early, to his credit.</p><p><strong>[00:04:55] Dan: </strong>Fair enough.</p><p><strong>[00:04:56] Tyler: </strong>The person's basic temperament, whether they are willing to be open and somewhat controversial and engage, doesn't change that much across the person's life. Maybe some people can be too old, or others too young. My intuition is mid-career is vast and there's a big wide long band where it's not changing so much.</p><p><strong>[00:05:16] Dan: </strong>What's the optimal episode frequency? Of course, the more episodes you do, the more guests you get on, but then you're less prepped for each guest.</p><p><strong>[00:05:24] Tyler: </strong>Every day is optimal. Now, I can't manage that, given the rest of my life, but it is my aspiration to do what I hope is a good podcast every single day. In essence, you do a lot of prep in common; you would have people with common topics or common backgrounds. Doing one podcast, in fact, would help you prep for another one. It's always more frequent than whatever someone is doing.</p><p><strong>[00:05:51] Dan: </strong>Say you're doing like two historians back to back. Today, what does the prep look like in terms of just raw hours? Let's say that you haven't had a historian on yet, and you've got a historian that's coming on next month. How many hours do you spend prepping for the conversation?</p><p><strong>[00:06:07] Tyler: </strong>Well, first, I will try to pick someone in a field where I've already read a lot throughout my life. 
That's not always possible.</p><p><strong>[00:06:11] Dan: </strong>Which is everywhere.</p><p><strong>[00:06:13] Tyler: </strong>Well, not everywhere, but if it's the Byzantine Empire, there's someone with a new book out about the Byzantine Empire, I'm thinking of having them on, but it's an area where my background is pretty weak. I'm thinking, do I have enough time to prep for this person? In a sense, you need your whole life, but for someone like Foster, the Irish historian I had on, I spent four or five months reading Irish history pretty intensively. That was rewarding, but you can't do that for every guest. It depends who else you're having on, and you always prefer to have some guests that are relatively easy preps. For me, those are the economists, but it's not that I want to slack, it's that I want to put more total prep time in.</p><p><strong>[00:06:58] Dan: </strong>I see. What do you think the optimal popularity is for the show? Why not put even more effort into marketing?</p><p><strong>[00:07:06] Tyler: </strong>I don't know that marketing helps much for podcasts, and the marginal listeners we would get I suspect would be lower quality ones than word-of-mouth listeners or people who know me. We market it to a fair degree. We have swag, and we have a Twitter feed, and we send people emails. We don't buy ads on cable TV. I don't think this audience is so large anyway. Say you could pull in another thousand listeners by putting ads on TV, who cares? You're not doing it to have another thousand listeners.</p><p><strong>[00:07:40] Dan: </strong>How important is an in-person episode? I suspect you're going to say very important, but especially if you were to aspire to do one episode per day, it seems like that'd be implausible. How do you determine whether or not it's worth going out of your way to make a conversation in person?</p><p><strong>[00:07:55] Tyler: </strong>Well, before the pandemic, I thought it was incredibly important to have them be in person.
Then, of course, all of a sudden, none of them were in person. We asked a bunch of people for feedback, and basically the listeners said, "These are not worse," so I had to revise my view. The main reason often I want to do it in person is so I can meet the person, not to make the podcast better.</p><p>That said, there's some minority of guests, probably below or close to 10%, where you can put them at ease by being there with them and chatting a bit in advance, and those are better in person, but most of them aren't. I've learned, and I was wrong at first.</p><p><strong>[00:08:35] Dan: </strong>Interesting. How much better do you feel you've gotten at interviewing for the podcast over time? Should we expect that the recent episodes of <em>Conversations with Tyler</em> are going to be a better you than the early ones?</p><p><strong>[00:08:46] Tyler: </strong>No, I don't think so. There was a streak we had, one of them was Richard Prum, the ornithologist. There was the woman who does archaeology using satellite data. There was like five or six in a row, maybe a year and a half ago, that's like our best streak of episodes. Maybe it's just going to get a little worse. Katherine Rundell was great, Lazarus Lake, but no, I don't think I'm getting better. On average, I should expect to get a little worse. Too bad. Sorry.</p><p><strong>[00:09:15] Dan: </strong>That's surprising to me. You have so much focus on practicing at what you do and on self-improvement, but you don't feel like you're getting better at the podcast over time. Why?</p><p><strong>[00:09:25] Tyler: </strong>I just think there's an asymptote to a lot of processes. If I'm preparing for, say, a historian for four months, well, I could prepare for seven months. It would only be very slightly better, I think, and my basic skills, how well I speak or how well I think on my feet, I don't think they're getting better.
There might be some areas where I'm in a better position to get desired guests, and that would make the podcast better. That's different from me getting better.</p><p><strong>[00:09:56] Dan: </strong>If you're able to get Paul McCartney on, what's like the most important number one question you would want to ask him?</p><p><strong>[00:10:01] Tyler: </strong>I would look at his early to mid solo career and dig deeply into B-sides, outtakes, different things he did in the studio and ask him about the details and see what he has to say, and not really talk about The Beatles very much at all.</p><p><strong>[00:10:17] Dan: </strong>Switching gears a little bit off the podcast, one thing I noticed, it seems like you're often pretty critical of translated literature, but at the same time, you're a really big fan of it. You love Knausgaard, Ferrante, you talk about Houellebecq, and obviously much of Harold Bloom's <em>Western Canon</em> is all in non-English. At the margin, what's the right way for a reader to think about whether or not to read a translated work of fiction?</p><p><strong>[00:10:37] Tyler: </strong>Well, the classics you should read anyway, but just realize you're probably getting something much worse. It's not true for every translation, but many things are just extremely conceptually in principle difficult to translate. It's one reason to learn a language: you get a better sense of what you're missing once you've seen how good or bad a translation can be, but you should never not read an important work because it's in translation. You mentioned Houellebecq. I actually read that in German first because-</p><p><strong>[00:11:06] Dan: </strong>Oh, interesting.</p><p><strong>[00:11:07] Tyler: </strong>-it was out in German before in English, and it's much better in German, I think. There's something about the seriousness of it, of sounding European, German with longer sentences in some ways being closer to French in that regard, and that worked.
I don't read French. Then I read it later in English and it felt a little more superficial, but I don't think that's the author's fault.</p><p><strong>[00:11:30] Dan: </strong>What's your benchmark, though, if you don't read French? You could benchmark it against English. How do you know it's a good translation? Is it just the flow of how coherent it sounds or how do you benchmark it?</p><p><strong>[00:11:43] Tyler: </strong>I don't think I always know. Typically, I can ask someone, and I would more or less trust what they had to say, but if it's a work with a very high reputation that I'm not enjoying at all, my first thought is to wonder if the translation isn't at fault because I think the market in classics is pretty efficient. That is, of everything considered a classic, pretty much all of it is quite high quality or very interesting. It could be the fault is not in the translation, but it's in me.</p><p>I've tried reading <em>Count of Monte Cristo</em> a few times, never enjoyed it. I don't think the fault's in the translation, because I've tried more than one. There's a new supposedly better translation. It's not philosophically deep in a way that might be hard to translate, so it's just like my defect.</p><p><strong>[00:12:28] Dan: </strong>Interesting, because the other day, you also posted an addendum to your best fiction books of the year and you added <em>Pedro P&#225;ramo</em>, the Spanish-language novel, and you said previously the translation, you didn't like it, this new one, you gave it high praise. Is this an example of that? You know it's a great work. You didn't like the previous translation and this one was just good enough?
Or how did you know this one was good when you were reading it?</p><p><strong>[00:12:51] Tyler: </strong>Well, I had read the original in Spanish, and that took me really a long time, even though it's only 120 pages, and the vocabulary is not hard, but as for what is going on, there's a paucity of information and you have to piece it together as a reader. That's hard in any language. If it's your third language, which Spanish is for me, it's harder yet.</p><p>The preexisting English translation just had zero humor in it and the Spanish had a lot of humor in it. This new one, I suspect, is as good as it can get. I bet if you asked the translator, he would know it's hard work to translate and that what he did wasn't perfect but was more or less optimized.</p><p><strong>[00:13:32] Dan: </strong>On your year-end best books, another one that actually surprised me was you put the new translation of <em>Brothers Karamazov</em> on there. I was surprised because you actually have talked-- you don't talk much about Dostoevsky at all, and then you actually even have a post where you respond to a reader who is like, "Why don't you talk about Dostoevsky?" You're negative towards him. What do you actually think of him? You were able to put this book on the best of the year? What is your view?</p><p><strong>[00:13:57] Tyler: </strong>When I was in high school, <em>Brothers Karamazov</em> was my favorite novel of all time, and I just loved it. That was actually from the Constance Garnett translation. I don't think Dostoevsky is worse. I think it's that I have very different concerns and what he's obsessed with doesn't register with me really at all. These big, huge questions about God and death and evil and murder, Tolstoy is much more relatable in a way that he wasn't when I was younger.</p><p>I think that's a shift in me, not that I've gotten worse. Those questions to me, though, they just feel played out.
Something like Thomas Mann, <em>Magic Mountain</em>, I think that I liked somewhat better when I was younger, but I don't think less of the work. It's just heavier stuff makes more sense in your 20s than in your 60s sometimes, and social nuance you'll find more interesting in your 60s, for me. Nietzsche doesn't interest me anymore and he's clearly a brilliant writer, philosopher, but I pick it up and it's a kind of blank.</p><p><strong>[00:15:03] Dan: </strong>I actually had a question on Nietzsche because you do list him. You have a list of authors where you should read all of their work and he's on it, but then you're usually pretty negative on him. In your conversation with Elijah Millgram, you give a scathing critique of him, saying that he's just basically an anonymous Twitter poster. [laughs] Do you fit him in the same category as Dostoevsky or how do you think about Nietzsche?</p><p><strong>[00:15:27] Tyler: </strong>He's still very important and was super insightful, but a lot of it's been absorbed. Again, I now have different concerns. Kierkegaard to me is more interesting, Nietzsche less so.</p><p><strong>[00:15:39] Dan: </strong>Okay. There was an article claiming that Domenico Starnone, I'm not sure if I'm pronouncing his name right, but he's a male. The claim was that he is actually Elena Ferrante. Let's just pretend for a second that this is true. My question is do you think the Neapolitan novels would've had the same success under his name? Or how valuable is actually the pseudonym here?</p><p><strong>[00:15:59] Tyler: </strong>First, as a side remark, my suspicion is that it's probably Starnone, though I don't know whether it's Starnone and his wife who co-authored the Ferrante novels rather than just Starnone.</p><p><strong>[00:16:11] Dan: </strong>I've seen that theory as well, yes.</p><p><strong>[00:16:13] Tyler: </strong>His wife is a well-known translator, is very literary, seems to be extraordinarily intelligent.
There's a lot in the Ferrante novels where I do truly feel only a woman could have written this. I don't know the actual story, but to address your question, if they came out as either co-authored or under a male name, yes, I think they would've done much, much less well in terms of number of copies sold. I don't know if that was their motive. There are significant downsides to fame. That may have been their motive.</p><p><strong>[00:16:47] Dan: </strong>Should more authors do this, should they create a pseudonym and try and restart their career?</p><p><strong>[00:16:51] Tyler: </strong>It's hard to keep it secret. Indeed, in this particular case, it didn't remain a secret, though I think most readers still don't have a sense of it. You have to do a lot of work, and it means you cannot do public appearances to promote your book, which is the main way people promote their book, not to mention podcasts. You're at a big disadvantage if you do not present yourself to the world. I don't think it will be that common.</p><p><strong>[00:17:17] Dan: </strong>Yes. Say I'm like reading an author who many people interpret as Straussian, like Houellebecq, how worried should I actually be about misreading their work or is misreading their work kind of the point?</p><p><strong>[00:17:30] Tyler: </strong>Maybe misreading is the point. I think with Houellebecq, there are multiple readings. When you're asking, "Well, how sympathetic is he toward Islam and Islam in France anyway?" You're not supposed to come away with a simple direct answer. It's all a misreading. None of it's a misreading. The multiple layers are important, and that's fine. If you think it's simple, probably you've misread the book. Other than that, give your mind free play.</p><p><strong>[00:17:56] Dan: </strong>One take of yours that I agree with pretty strongly is where you say that a lot of the recent literature is actually not so far behind maybe the 18th or 19th century.
You've got Knausgaard, Bola&#241;o, Sebald, a whole bunch of authors who've actually done really great work. Again, notably a lot in translation, though.</p><p><strong>[00:18:13] Tyler: </strong>Yes.</p><p><strong>[00:18:14] Dan: </strong>My question is do you expect it to keep up in the next decade? One of the confounders here I was thinking about is AI. I guess I won't give you any more perspective on that. I'm just curious for your overall view over the next 10 years.</p><p><strong>[00:18:28] Tyler: </strong>I don't have a concrete prediction, but I don't see a reason why it should slow down, so I'm not pessimistic. Obviously, things come in waves and bulges and there's back and forth. <em>Solenoid</em>, which was translated this year, is an incredible novel. Why think it has to stop? Now, maybe over some longer time horizon people will read AIs.</p><p>There was just a Chinese science fiction story that won some award, and it was written by an AI.</p><p>I think short stories will come way before novels and even then a lot of people will prefer the human product and AI might help you write a great novel in different ways. Again, not a concrete prediction, but no reason to be pessimistic. Readers want it, humans can do it. What's in the way?</p><p><strong>[00:19:18] Dan: </strong>That's great news. Let's get excited.</p><p><strong>[00:19:20] Tyler: </strong>Yes.</p><p><strong>[00:19:21] Dan: </strong>I asked Scott Sumner a similar question, but I really want to get your take on this. Say I'm just a casual movie-goer, and I go see the biggest hits every year, but I haven't really seen too many Hollywood classics and I'm not familiar with someone like Tarkovsky, but I have the opportunity to go see <em>Memoria</em> in theaters, should I go and what should I do to prepare myself?</p><p><strong>[00:19:40] Tyler: </strong>Oh, of course, you should go. To see it on a small screen is almost worthless. It's worse than reading literature in translation.
Some movies you just have to see as they were made.</p><p><strong>[00:19:52] Dan: </strong>Would I find it boring if I'm just a casual movie-goer, I haven't seen a lot of these "Art House" indie movies? How do I prepare myself to not be bored by it?</p><p><strong>[00:20:01] Tyler: </strong>I don't know what you mean by the word casual. Not everyone likes any kind of movie, but I'll just say the first "deep foreign movies" I saw, I loved them immediately. I was at that point in time by definition just starting out. I don't know if you'll like it, but the returns to you trying it are very, very high. Again, you need to be ready to just walk out. Just leave.</p><p>What's the cost? $14, $15, parking and driving. Definitely do it. I don't think these things are hard to understand. It's whether or not you're interested, like am I interested in Nietzsche and Dostoevsky right now? Well, less. That's fine. You might be on that tier with <em>Memoria</em>, but there's going to be something maybe like <em>Godzilla Minus One</em> where, "Oh, wow, this is what I've been waiting for." For me, it's both. Godzilla was good too.</p><p><strong>[00:20:55] Dan: </strong>Fair enough. Actually, I read you as like overall, though, somewhat negative on Hollywood. Maybe you like Godzilla, but it seems like you are generally more bullish on, again, foreign movies and lower-budget indie movies. What's just your overall sentiment on film for the next 10 years, similar question to with books?</p><p><strong>[00:21:12] Tyler: </strong>I suspect it's at a turning point. Putting this year aside, Hollywood, to me, it seemed to have a four or five-year run that was just the worst four or five years in the mature history of Hollywood ever. That was enough to make me pessimistic. Then you had big tentpole movies, a few of which were entertaining, but mostly a bad trend.
This year, I went back and looked at my movies list, and it was incredibly good, and not just foreign movies like <em>May December</em>.</p><p>Last night, I saw <em>Poor Things</em>, two incredible movies. They're not mainstream Hollywood, but it's not South Korean or Iranian. It's broadly a Hollywood movie. My hope is we had this negative shock to theater going, couch potatoes, Netflix. Movie makers had to adjust. Now they've adjusted. I'm pretty optimistic looking forward. Though I admit there's only one year of a really good data point for cinema, but that one year is so strong. I'm optimistic.</p><p><strong>[00:22:15] Dan: </strong>You indulge in a lot of different artistic mediums. My question to you is, how do you decide what to enjoy next? Say you've got some free time and you're trying to decide whether to read a novel, watch a movie, or maybe you're thinking between visiting a museum versus listening to music. Do you just do what you feel like or do you have some strategy for allocating your time to enjoy different types of art?</p><p><strong>[00:22:35] Tyler: </strong>I think it's pretty impulsive and selfish. There's also a side constraint: what is my wife interested in doing? She and I have broadly similar desires. I try to see all the major exhibits that come to town. I think I get to see most of the major movies I want to see. I'm trying to do it all. I wouldn't say I succeed. There's some areas I just don't touch upon that are probably very good, but something like dance I just don't follow. I know I wouldn't have the time, and that's a shame, but there's scarcity.</p><p><strong>[00:23:09] Dan: </strong>What do Jonathan Swift and Peter Thiel have in common?</p><p><strong>[00:23:12] Tyler: </strong>I wrote a paper about Jonathan Swift for a conference sponsored and chaired by Peter Thiel. That paper will be published, but Swift was pessimistic in some of the ways that Peter is. He was obsessed with questions of technology.
He was doubtful as to how much moral improvement there really is in people. He thought that war was likely to recur perpetually, or at least the risk of war. He was pretty skeptical about a lot of the academicians and intellectuals of his own time.</p><p>Swift even wrote about venture capital and innovation in Gulliver's Travels. He mocks the world of science, the flying island with all the innovations, and the scientists running around acting like kooks. It's also the only major world in Gulliver's Travels where there is no slavery.</p><p>To have an absurd science pursuing a lot of dead ends is better than the world of slavery. I'm not sure how much you could classify Peter's ideas in that same framework, but I don't think they're totally dissimilar: the idea that there's something quite violent in human history. For Peter, it's Girard more than Swift. You just want continuing antidotes to that violence, which I think you can find in both Peter and in Swift, and my paper makes these points. It will at some point be out.</p><p><strong>[00:24:36] Dan: </strong>When is it coming out?</p><p><strong>[00:24:38] Tyler: </strong>I don't know. They're doing a book. Books take longer, I would guess more than a year, but less than two years.</p><p><strong>[00:24:45] Dan: </strong>Got it. Should a classically liberal society draw the line on crude comedy? When, if ever, should a joke be off limits?</p><p><strong>[00:24:51] Tyler: </strong>If by off limits you mean illegal, I don't think it should be illegal in most cases. Now there are extreme instances such as modern Germany, which I think has made certain jokes illegal for reasons related to Nazi history. I don't know if that's a good idea. I think I would grant there might be a few extreme exceptions of that kind, but even there, I suspect it's not a good idea. If you just mean the comic should be canceled, I don't know. I would prefer just not to patronize the person and not lead some online charge against them.
If it's not funny and in poor taste, just ignore it.</p><p><strong>[00:25:29] Dan: </strong>In your analysis of what we might call the new right, just broadly defined, it basically boiled down to how they have a huge mistrust of well-functioning elites. My take here is that the internet is maybe the most causal factor behind this, like the Martin Gurri thesis. My question to you is do you think that we'll be able to trust elites in the internet age without censorship or is a free internet going to keep us trapped in this distrust?</p><p><strong>[00:25:56] Tyler: </strong>I agree with you that Gurri is probably right and probably has the best explanation. Gurri himself is modestly optimistic. He thinks internet norms will somehow adapt to this over the next few decades. I don't at all dismiss that possibility. I would make the additional point, the whole internet is about to change because of AI, and whatever the internet gave us two years ago, it won't be what it's giving us five years from now. It's just going to be remixed in some very different way. Probably it will change the dynamic where it's just making us cynical, but I don't know what that new change will be.</p><p>Imagine you wake up in the morning and you just say to your AI, what was on the internet last night that I might want to see, and it serves it to you. I think that will just be a very different experience, but I don't yet know how.</p><p><strong>[00:26:44] Dan: </strong>That's fair. What do you make of Scott Alexander's theory that political views swing on a pendulum between generations? I think the name of his post is called Right is the New Left, but the basic idea is you've got like the boomers and Gen X, some of their liberal sway he attributes to a rebellion against maybe more conservative parents.
Then today the argument is that what you see with libertarians getting a little bit more interested in some of these either neo-reactionary, whatever you want to call it, like new right types of movements would be a reaction against their liberal parents. I'm just curious, how much weight do you put on a theory like that?</p><p><strong>[00:27:21] Tyler: </strong>I would want to reread Scott's post, but I would say in general, I'm quite skeptical about generational theories of all kinds. I don't think there are usually discrete breaks. There can be with particular events like 9/11, the great financial crisis, COVID probably will be one, but they may or may not overlap with how we classify generations. I just think of the data. There's a lot of papers on generational effects and they don't impress me that much. On this, is there a clear statistical test to show that hypothesis is true? I haven't worked through that literature. My guess would be no, there's not such a test.</p><p><strong>[00:27:58] Dan: </strong>You say the most important thinkers of the future will be religious, but you yourself are agnostic. Who specifically should be religious?</p><p><strong>[00:28:05] Tyler: </strong>I don't know if people should be religious. I think on average you have a better life, but that doesn't necessarily mean you should do it. You mentioned Peter Thiel before. He's probably been the most influential public intellectual over the last five to 10 years. My other pick would be Elon Musk. Peter is the religious thinker of the two, and Elon is the secular thinker of the two. They used to work together.</p><p>The best columnist in the world right now, I would say, is Ross Douthat, who's not only Catholic, but he's very religion and God-oriented in his writing. We already see this. A lot of people ask me about that statement. I'm amazed that to other people, it isn't trivially true. Now, religious thinkers may be a minority.
They just have this very rich source of ideas, inspiration, and motivation that a lot of secular people don't have.</p><p>I'm not religious, but I think in some ways I'm a religious thinker. The sense of personal mission is very strong in me and I think, or at least hope, it comes out in what I do. I'm not a total exception to the claim.</p><p><strong>[00:29:11] Dan: </strong>What is your implicit theology?</p><p><strong>[00:29:13] Tyler: </strong>A lot of it is American Protestant with some Jewish elements mixed in. For someone who grew up in the northeast in basically the 1970s, that's extremely common and indeed garden variety ordinary. I like it, but it's nothing special about me. I'm a regional thinker like most people, and that's my region.</p><p><strong>[00:29:34] Dan: </strong>What did you actually get out of reading Plato when you were very young? I think you said you read it when you were like 12, 13. Can young people really understand classic philosophy?</p><p><strong>[00:29:42] Tyler: </strong>I strongly believe they can, maybe not all, but at least the ones who care. For one thing, you just get a sense of truth being conversational or dialogic and that there's not any single position that can answer all the complaints raised against it. That was one of the most important things I learned. You learn particular techniques of persuasion and argument, and you also learn why they fail. Most of the arguments in Plato's dialogues fail, and often they're pretty bad. It's an interesting question how much Plato knew that or designed them that way. I think he did, but people have different views on that.</p><p>Then there are particular dialogues, like the Euthyphro question. Well, is it good because God says it's good or does God say it's good because it's good? Whatever you think the answer might be, I'd never heard that question until I read Euthyphro, and I still use that as a mental device.
You can just learn a lot of particulars and then a lot of the writing's beautiful. <em>Symposium</em> to me is and was very beautiful. The <em>Parmenides</em> dialogue, in a funny way, I found that beautiful. Yes, you can learn a lot at quite a young age.</p><p><strong>[00:30:51] Dan: </strong>What makes Shakespeare, Proust, Moby-Dick, so much better than a Stendhal, Twain, or Thomas Mann? Why is it that there's this tier up that's well above all other writers?</p><p><strong>[00:31:02] Tyler: </strong>I never knew any of those people, of course, but I have to think they were just somehow more extraordinary as human beings, and they had that extra something they could draw upon. That's a hypothesis very hard to verify or refute. Maybe something about their time was especially rich as well. The times of Stendhal, that was obviously a rich, fertile time. There's something ultimately quite mysterious about it. I like how Harold Bloom puts his finger on that mysteriousness.</p><p><strong>[00:31:32] Dan: </strong>Where has Fernando Pessoa most influenced you?</p><p><strong>[00:31:35] Tyler: </strong>The style in which he wrote the book, the book of small things. It's very scattered. I think it's a more fruitful approach than what Nietzsche did. It's more positive. It's about how to find beauty in life in different ways. It's also about how to innovate. It's positive, but there's this melancholy to it as well. That to me is a very rewarding book. I think it should be much more prominent.</p><p><strong>[00:31:59] Dan: </strong>How would the world be different if every person understood that demand slopes down?</p><p><strong>[00:32:04] Tyler: </strong>I don't think it would be very different. I think most people do understand that. They may not articulate it as such. The problem arises when they interpret public events and policies through emotional lenses, mood affiliation, sometimes even evil motives.
The people who do bad things, like did Hitler not understand that demand slopes down? I'm pretty sure he understood that. If the price of bullets went up, maybe he'd buy for the army fewer bullets and more of something else, but he was still extreme evil. Unfortunately, that act of education won't get us very far.</p><p><strong>[00:32:41] Dan: </strong>Why is that concept-- it seems like you valorize this concept a lot. Is it just that it's so important, but it's already well understood, so there's not much more we can do with it, or why do you often cite this as such an important insight?</p><p><strong>[00:32:54] Tyler: </strong>I don't think it's well articulated. It's well understood in the sense that people live by it in the supermarket. When you're thinking through public policies, keeping that opportunity cost, gains from trade, and a few other central ideas foremost in your mind, and not letting them be crowded out by mood affiliation, evil sentiments, what some people call tribalism. I don't like that word, but you know what I mean. To me it's very important. It's taking seriously what you already know that is difficult for people.</p><p><strong>[00:33:23] Dan: </strong>Got it. Practically speaking, like what's the fundamental difference between a public and a private organization? At the end of the day, aren't these just groups of people that sit in a room to solve problems? Why is there so much fuss about whether work gets done in the private or public sector?</p><p><strong>[00:33:38] Tyler: </strong>There are differences in many areas. There's some areas where you find no difference. In the data, a privately-owned water utility seems quite similar to a publicly-owned utility, noting that the private utility is still regulated heavily by the public sector. You do observe, especially in young new firms, a lot more dynamism than you observe in old public sector bureaucracies.
That said, if you look at public and private universities, an area I know pretty well, the private ones can act much more quickly, which is good, but they're far less tolerant.</p><p>The problems that have surfaced lately with universities, overall, they're worse in the top-tier private schools than in the public ones, which, maybe, a priori is not what a lot of libertarians would've expected, but it's very clearly true. There's much less free speech, both de jure and de facto, in the private places.</p><p><strong>[00:34:34] Dan: </strong>On that topic of just like governance structure for an organization, do you think that this is a solved problem for business? Crypto in some ways is trying to innovate here with like the concept of a DAO, but like broadly speaking, the modern C-Corp seems like it's working pretty well. Would you expect this over the next 50 years to still be the standard for the way that corporations work? Maybe with some tweaks here and there, or do you see innovation coming in governance structures for businesses?</p><p><strong>[00:35:00] Tyler: </strong>I suspect it will still be the standard. I'm not sure it works very well. My core view is there's some minority, to be clear, of new firms that are super effective, and large firms that are effective because of scale, but on the ground they're massively inefficient and bureaucratized and can be worse than the public sector. I'm not sure we'll change that, but if there's a way to change it, it would be to carve out more dynamic spaces in large companies.</p><p>They're clearly doing well in terms of profitability. Again, that's mainly scale or sometimes because of regulation, but you deal with them, and you think like, "My goodness, this is horrible." Or like, "I could never work here, I would hate that." If anything changes corporate governance, I think it would be AI.
I think what we will start doing is having companies, probably new and small ones, take all their data, shove it into the AI, just ask it, what should we change? [laughs] Then listen.</p><p>I don't think it's the established companies that will actually do that. They may play-act at doing it, and I don't know what all the suggestions will be, but I think that's going to matter quite a bit. It may not change the functional forms much, but we'll see.</p><p><strong>[00:36:14] Dan: </strong>It's pretty striking to me just how different John Rockefeller seems from Elon Musk, from Lee Kuan Yew, all of these really charismatic successful leaders. If you were to lead an organization, how much effort should you put into studying the lives of these great business folks versus just doing?</p><p><strong>[00:36:29] Tyler: </strong>Sam Altman says you should. Most of the people I know who are highly successful in CEO-type roles, they do it. I'm never sure what the marginal product of doing it is. Is it the doing it, or is it the feeling that you've done it, so that you have a certain level of confidence to just proceed with some decisions? I don't know, but look, the smartest people think you should do it, so I guess I'll side with them, but I'm not entirely convinced, I would say. You said Rockefeller, Musk, I was wondering which is the one you think is charismatic and which you think is not charismatic? That's going to depend, I guess I think they both are, or were.</p><p><strong>[00:37:09] Dan: </strong>Yes. No. Fair enough. I said Rockefeller, Elon Musk, Lee Kuan Yew, I guess very, very different, but maybe charismatic to different people.</p><p><strong>[00:37:17] Tyler: </strong>Exactly. I think for any one of those people, there's a lot of listeners who wouldn't like them at all, or wouldn't find them charismatic.</p><p><strong>[00:37:27] Dan: </strong>Yes. Rockefeller is like a religious guy, totally abstains. 
Elon Musk is, I don't know what he is, but it's-- [laughs] I don't imagine that Rockefeller would have the same temperament, so yes. There's a quote about how people make steam engines, and it basically goes that you can make them without knowing thermodynamics, but the people who do know thermodynamics are going to make better steam engines. How much should business leaders actually understand economics? How much does it matter?</p><p><strong>[00:37:55] Tyler: </strong>I think if you're a business leader, the return to knowing a few months' worth of economics is extremely high. You can even imagine packets designed around that. Past that, I think the return is quite low. You might need to know highly specific things about finance or your sector that would involve some additional learning of economics, but it's a learn-as-you-go sort of thing. The idea of a few years spent studying organizational theory, I'm pretty sure that's a negative return. It's going to confuse you, get you looking for the wrong things. Again, to know basic economics, highly valuable, and the people in these top positions basically all do know that.</p><p><strong>[00:38:37] Dan: </strong>Yes. Okay, let's assume, I've asked a few people this question, that all progress on AI models stops tomorrow. The doomers win, there's regulation, there's no more new models, but we still have GPT-4, Claude, Bard, the models that are out in existence today. Would you expect there to be an obvious causal boost in total factor productivity over the next decade, just due to the models that are out today?</p><p><strong>[00:38:59] Tyler: </strong>Over the next 20 years, not a decade. People are slow, and large corporations are bureaucratic. It could take 15 to 20 years. I'm also wondering in your thought experiment, are token costs allowed to fall, or are they just set where they're at? 
If token costs can fall, that's de facto like having a huge amount of extra progress, even if it's just GPT-4, because there are ways you can layer current models that are quite expensive but a lot more powerful than what's in your pocket. We would do those at lower token costs. Yes, I think there's a lot of productivity already embedded in what we've done.</p><p><strong>[00:39:36] Dan: </strong>Got it. My question would be, with Microsoft Excel, the release of that software seems like it would've driven a huge amount of total factor productivity growth. I actually don't know, but typically when I bring this up with people, the answer is, it didn't causally create some. I'm curious how you compare GPT-4 to the leap that Excel made.</p><p><strong>[00:39:59] Tyler: </strong>I think office software, I don't know if it's Excel or not. Probably it wouldn't be Excel, but in the time period, 1995 to 1998, give or take, office software and inventory management systems did lead to a big boost in productivity, so why did it end? I don't understand, but we got quite a bit out of it. We had some rates of productivity growth of about 3% for a few years. That was our mini escape from the great stagnation on an economy the size of the US, plus there's a global effect. It's worth a lot, but I agree it's a puzzle why it just seemed to end.</p><p><strong>[00:40:35] Dan: </strong>Yes. To what degree is the internet actually amplifying worries about AI? What I'm thinking about here is LessWrong seems to have done a lot of work on helping people understand the risks, whether you agree with them or not, and it's been hugely influential with the PAUSE movement. If the internet were around during the Trinity test, do you think we would've had the same level of concern about nukes?</p><p><strong>[00:40:59] Tyler: </strong>Oh, sure. 
If nuclear research had been open to the public in some manner, it probably would've been much worse, and the scientists themselves who knew what they were doing, by definition, they were extremely concerned. They just didn't have any internet. I think people are, by nature, rivalrous. That may be optimal in many ways, but we're going to do this no matter what. We'll see who's right and wrong.</p><p>We're not going to let China rule us with Chinese AI. You might think that's a better world because we lower the chance of total doom by 2% and you do the hostile calculation, but the world just doesn't work that way, and blog posts are not going to make it so, and that, to me, is a more important point than whatever your P(doom) is.</p><p><strong>[00:41:47] Dan: </strong>Yes. Sort of on that topic, one of my favorite takes you have is that, times right now, if they feel chaotic, they're actually not the unusual piece. The unusual piece was the relative period of peace we had after the war, up to 9/11. I am curious though, another observation you have is that society has become more feminized starting in maybe the 1970s. Do you think that a more feminized society is a positive or a negative in times of chaos?</p><p><strong>[00:42:15] Tyler: </strong>Well, it's been a positive up until now because it's led to less war, but I think the data from the last three to five years show the trend of less war has been reversing only very recently, but obviously, that's bad. It's possible that the feminization trend is over, and what we're seeing now is an increase in the variation of feminization.</p><p>A lot of parts of the world will continue to feel much more feminized, but there's some sort of backlash where there are men trying to act manly under different visions of what that means and even seeking out non-feminized spaces. I admit that worries me, but it's a bit like AI. 
It's also inevitable if you're going to have that much feminization that quickly, and just to try to yell stop is not so fruitful, so I would like to think there are more positive ways we could steer that, but I think that trend of a simple increase in feminization, that's probably over, even though it doesn't feel like it's over because of this variance effect.</p><p><strong>[00:43:18] Dan: </strong>Yes. Interesting. Why should someone read all of Fernand Braudel's work?</p><p><strong>[00:43:23] Tyler: </strong>I don't know if they should read all of it. The huge three-volume set on the structures of everyday life, that's amazing. It's incredible. I got a lot out of those. The multi-volume set on the Mediterranean is excellent, I like it as much as the other one. His later book on France was interesting but not really all that well-developed. There are other things he's written that I haven't read. You should read his major works. I'm not sure you should read all of it. I don't know.</p><p><strong>[00:43:53] Dan: </strong>What is the best argument against Scott Sumner's proposal for NGDP futures targeting?</p><p><strong>[00:43:57] Tyler: </strong>Well, there's a number of different parts of the proposal, but the first point I would make is we don't have an NGDP futures market, which Scott readily admits. You just can't do it. Now, Scott favors experimenting with subsidizing such a market, and I'm fine with that experiment. We've done that on a very, very small scale, and it hasn't taken off. My guess is those experiments won't work. Then there's the backup question. What about just targeting the NGDP level in some manner?</p><p>Mostly, I agree with Scott, to be clear, but I think my pushback would be, can we ever make that a rule? Like Carl Schmitt said, sovereign is he who decides the exception. I don't think you really can bind most parts of your government with rules. 
Certainly not your central bank, not your treasury, not your foreign policy, so I favor NGDP as a kind of idea you inject into the mix to get central bankers more excited about doing the right thing and less fearful, but I don't think it can be a rule. I don't think there's some better rule per se. I just don't think it's an area where rules can make sense, and indeed, you saw in the pandemic, there's no rule that could have prepared us for what to do. Whatever you think we should have done, no rule specified in advance would've handled that well.</p><p><strong>[00:45:15] Dan: </strong>Yes. I am curious, do you think that this is a bullish case for AI taking over humans then because, presumably, for AI to automate things, that's got to be, to some degree, rule-based, but if we need a human in the loop, would that suggest that there will still be quite a bit of human employment when AI takes off?</p><p><strong>[00:45:36] Tyler: </strong>There's a bunch of different questions in there. The first is, how does AI relate to rules? I think one thing we learned from AI is the notion of a rule makes less sense than we think. Say, you ask GPT questions about monetary policy, about Ferrante, whatever, it'll keep on giving you answers. The answers might vary, and they're based on processes that are not transparent. Is that like GPT discretion, or do we want to say, our rule is always listen to ChatGPT?</p><p>It's like both a total rule and total discretion at the same time, and that dimensionality of how rule-like it is, it just seems to become moot. That's what I take as one big lesson of AI. Our rules make less sense than we thought, but then there's the question, well, how many people will it put out of work? I think 20 years from now, the number of people on staff at firms can be lower than it is now, but not a tiny number. 
I'm struck by the fact that the research staff at OpenAI itself, it was not too long ago, I was told, nine people.</p><p>It's a remarkably small number for what might be one of the most important creations in all of human history. Some other things will become like that. I'm not sure how much and how many, but we're going to see other things like that. Or Midjourney: I think when it was doing peak innovation, total staff, I don't even mean research staff, just total employees was eight or nine, and maybe they had contractors. I get all the ambiguities, but still, that number should shock us.</p><p><strong>[00:47:06] Dan: </strong>Yes, I've often wondered if, actually, that could be a business model where you just say like, pick your enterprise software or pick your existing business and your whole business model is like, we're just going to build this but with a 10th of the employees and use AI from the ground up. Presumably, for firms that already exist, the transition to downsize and leverage AI would be much harder than it would be for someone to start up and do it from the ground up, so that's been a pet theory of mine.</p><p><strong>[00:47:32] Tyler: </strong>I think that would be a big trend. I very much agree with that, and it will shock people. It gets back to your earlier question, how might corporate governance change? 
Even if the formal structures don't change, if a whole tier of firms is way, way smaller, it will feel very different.</p><p><strong>[00:47:48] Dan: </strong>Yes.</p><p><strong>[00:47:50] Tyler: </strong>Then what you do with the people who are not doing the jobs that now don't exist, that will be its own change, so there's going to be like a lot more gardeners everywhere, I don't know, carpenters, people greeting you at the restaurant, I think there'll be full employment, but you're going to see a lot of very weird jobs, like you do that, like you come over and you massage dogs' paws for rich people or something, I don't know, it's going to be weird.</p><p><strong>[00:48:17] Dan: </strong>Yes. [laughs] One of Scott Sumner's takes, I've heard you say you agree with, and this is one of my favorites, is that he doesn't believe that the 2008 housing asset runup was actually a bubble, and he doesn't believe Bitcoin was a bubble either, the price runup in both of those. My question to you is like, how long of a time horizon do we need to wait to assess whether or not something is a bubble? If tulips come back, we're not going to say the Dutch were just too early, it wasn't actually a bubble. Well, what's the time horizon we need to make that assessment?</p><p><strong>[00:48:48] Tyler: </strong>I don't think it's time per se, but if you look at the whole context, I think you usually can say, so I mostly agree with Scott, but if you're saying, well, some Las Vegas suburbs, homes there were indeed a bubble. That's probably true, and it's past the point where, oh, if 100 years from now, they're worth a lot because of desalination and AI, and you want to say, oh, it was never a bubble. 
Oh, come on.</p><p>Most real estate has definitely come back, but there's some markets where I think you can say it was a bubble, quite a few crypto assets that most people haven't even heard of where, like maybe it wasn't even a bubble, it might have just been a fraud, but it had bubbly-like aspects in early stages and you can render judgment, I think, in many cases.</p><p><strong>[00:49:33] Dan: </strong>In what area do you think there are too few Emergent Ventures applications?</p><p><strong>[00:49:38] Tyler: </strong>You mean geographic area or area of work?</p><p><strong>[00:49:40] Dan: </strong>Area of work, yes.</p><p><strong>[00:49:42] Tyler: </strong>Well, really good people working with ideas, and I think especially in small countries. It's so much easier as an individual to have an impact in a smallish country, I lived in New Zealand for a while, so I saw that, I lived it, than in a large country like Brazil or India. People working with ideas in small countries, to me, still seems radically underprovided. I would support more of it if the proposals were good.</p><p><strong>[00:50:10] Dan: </strong>Do you think we'll ever have a coherent theory of what happened to the economy in 1972, even if we just poured more people into the economics profession to work on it? Is that a problem that can be solved?</p><p><strong>[00:50:19] Tyler: </strong>I don't think we'll know much more than what we know now. Our opinions will change a lot based on what the future looks like. Say, if AI is as big a deal as I think it will be, we'll then be more inclined to say we had exhausted the previous general-purpose technology. There won't really be new evidence about '72, '73, but it will feel like that's a critical idea maybe in a way that's even a little unfair. I think that's what will happen, but we won't actually get new evidence, not much.</p><p><strong>[00:50:51] Dan: </strong>What about this other big mystery? 
Will we ever have a coherent theory of how differences of culture interact with traditional economic mechanisms? Basically, what is culture's impact on traditional economics?</p><p><strong>[00:51:03] Tyler: </strong>You've deployed these two words, culture and theory. They're not literal direct opposites, but once they're both in play, I know they're not really going to fit together. That's in a sense what culture is. It's some of what's left over after theory is gone. Theory is demand curves slope downwards, and culture is something else. We're not going to make that much progress. We will catalog it much, much better, however.</p><p><strong>[00:51:28] Dan: </strong>Who is more right about immigration? Brian Caplan or Garrett Jones?</p><p><strong>[00:51:31] Tyler: </strong>People dispute what Garrett thinks and says, to be clear. I know Garrett well, and I feel I know what he thinks. I would say Garrett is clearly correct. Garrett, in my view, is pro-immigration, but he wants to be very careful about who is let in. Now, that said, I think Garrett should be more pro-immigration than he is. I view myself as more sympathetic than Garrett might be to taking in what typically would be called low-skill immigrants, and I'm not sure they actually are low-skilled.</p><p>I think Brian's open-borders idea is crazy and totally unworkable. The actual effect of it would be to elect someone fascist-like in countries that would try it. You wouldn't even get open borders. You would have a severe backlash, and open borders for a wealthy country just wouldn't prove workable. I would very much like to increase immigration levels. It just has to be politically sustainable.</p><p>The skilled people, they need nannies and people to clean their homes and wash their cars, whatever. Along with skilled immigration, you also want "unskilled immigration". Again, these unskilled people, they can do so many things I can't. They're awesome. Try to put together your own ping pong table, can't do it. 
Have over an "unskilled immigrant" like that. Awesome. Thank you.</p><p><strong>[00:52:56] Dan: </strong>Yes. If you had to restart your career as an academic but you couldn't do economics, what would you do?</p><p><strong>[00:53:03] Tyler: </strong>If I have to be an academic? It's a bad time now to enter academia. It's shrinking, there's a lot less freedom of speech. Maybe I would consider being a legal academic or a philosopher, but it would be grim. I feel I got very lucky; the area I ended up in, when I entered it, was really a golden age.</p><p><strong>[00:53:23] Dan: </strong>If you weren't restricted to being an academic, what would you do?</p><p><strong>[00:53:28] Tyler: </strong>You could ask what I do now. A lot of what I do now, you could call publishing. I guess I would be a publisher. It's not a hypothetical, I am a publisher, and I work for a publisher, which is <em>Bloomberg Opinion</em>. In this sense, the question's already answered, right?</p><p><strong>[00:53:43] Dan: </strong>It is, yes. Detached from GMU and do the same thing.</p><p><strong>[00:53:48] Tyler: </strong>Yes, that would be it, and I could.</p><p><strong>[00:53:52] Dan: </strong>How should one decide which cultural codes are actually worth cracking? For example, would you ever go to Burning Man?</p><p><strong>[00:53:58] Tyler: </strong>I've been told at Burning Man, there's no internet, which for me would be a work problem and just a fun problem. I've been told there's a lot of drugs, which doesn't interest me. I guess the cost would be high. I think it would be very interesting, but probably I wouldn't go. I think there's an infinite number of cultural codes to crack. You just have to choose what you're interested in and/or what overlaps with work you need to get done. There'll always be more, you don't have to worry about running out. I wouldn't quite say it's all equally interesting, but all of it is so interesting. 
You don't even have to worry about that either.</p><p><strong>[00:54:33] Dan: </strong>Okay. I really like this quote. There's an interview on <em>The Browser</em> between Uri Bram and Applied Divinity Studies. One of the quotes is just that the Marginal Revolution extended universe is incredible. I can't get over how many writers and thinkers I know seem to have got their start by a link from Tyler or Alex, the MR universe. It doesn't have a name like the rationalists or the effective altruists. My question to you is, should it, or is that by design?</p><p><strong>[00:54:59] Tyler: </strong>It's not by design, but I don't feel I want it to have a name. It may be better operating under the radar. Also, the original vision for Emergent Ventures was to operate under the radar. There's zero advertising, but now, it is, in fact, pretty well known. That over time might make it worse, probably will make it worse. You want to keep a lot of things under the radar. That's hard for people because they want to earn income from it or fame or some kind of rent or some reputational rent. The world is always trying to convert you into that existence, and then you have to keep on pushing back.</p><p><strong>[00:55:33] Dan: </strong>Fair enough. A question I wanted to ask you that I find really interesting is that you seem to maybe be one of the least likely people to get into an emotional confrontation with someone. I think this was really on display, for example, in your conversation with Amia Srinivasan. Your post on how you practice at what you do, though, has no mention of working on your temperament. Is your temperament something that you work at?</p><p><strong>[00:55:55] Tyler: </strong>I think temperament is pretty genetic, I don't know the literature on that. That's what I observed, and when you see young babies, toddlers, it seems that's how they're going to be, that those parts of it are pretty built in. There are advantages and disadvantages to detachment. 
I think I have both those advantages and disadvantages, and that's just my hand of cards, and I'm going to play it, so to speak. I don't sit around trying to be more detached. I think as you get older, you do become more reasonable operating from almost any basic initial temperament. That's happened to me as well.</p><p><strong>[00:56:33] Dan: </strong>If I'm someone who says, "Man, I get too hotheaded on Twitter and I say things I shouldn't," is this just baked into the cake of my genetic predisposition or is it something that you think you can work at?</p><p><strong>[00:56:43] Tyler: </strong>Something that operational, you can work at. I know people, they'll take Twitter off their phones. I don't know how effective that is, but it seems if demand slopes downward, it should be at least somewhat effective. Then something that specific and operational, just don't be on Twitter at all. A lot of people have left, so that, you can manage. I don't know that you can change yourself so much, but particular things like, should I go visit Uncle Joe and get mad every Thanksgiving because he supports Trump? That, you can fix for sure. I'm not sure you can, again, change how you engage with people more generally.</p><p><strong>[00:57:16] Dan: </strong>Yes. Should more people create their own version of Tyrone?</p><p><strong>[00:57:19] Tyler: </strong>I hope not. Should more people blog at all? There, I would say yes. I would like to see more blogs and less Twitter, less TikTok, but clearly, it's gone the other way and for reasons that are probably permanent until AI changes things again.</p><p><strong>[00:57:36] Dan: </strong>I think what I'm getting at here is maybe more at the conceptual level. The way that I view Tyrone anyways is a framework for thinking through points of view from someone else's perspective.</p><p><strong>[00:57:45] Tyler: </strong>Oh, yes. That, much. 
more of, and you should have hundreds of Tyrones in your head of different kinds.</p><p><strong>[00:57:52] Dan: </strong>Do you have others that are not Tyrone that don't come out?</p><p><strong>[00:57:54] Tyler: </strong>Sure. I even had a phrase for this once, I called it Phantom Tyler Cowen or Phantom fill in the blank. I said to someone, there should be Phantom Tyler Cowen sitting on your shoulder, not as the only phantom by any means, responding to what you're thinking, saying, writing, and you test it against the phantoms. I'm a big believer in that.</p><p><strong>[00:58:14] Dan: </strong>Interesting. You use phantoms when you're thinking through something. Actually, who are your phantoms? Who do you look up to and get advice from today?</p><p><strong>[00:58:22] Tyler: </strong>It depends what I'm writing on, but in economics, it would basically be the famous economists. I don't think there'd be any big surprise in there. I don't write fiction. If I'm doing a podcast, I'm not really thinking as I'm going along about that. You might have a thought ex-post, but I don't really think like, "Oh, Joe Rogan wouldn't have liked that there," or I don't know. The podcast is just done and you're already busy with the next one.</p><p><strong>[00:58:48] Dan: </strong>Another thing I find interesting about Tyrone is that you said that he noticeably has fewer posts recently. One of the reasons you gave for this before is that you spent less time on him because the world has become a more bizarre place. I would actually think that Tyrone is much more valuable when the world has become a more bizarre place, like what gives there?</p><p><strong>[00:59:08] Tyler: </strong>I think Tyrone works to the extent he does because he's shocking or was shocking. Now, it's much harder to shock people given what we accept in the normal course of ordinary affairs. 
Tyrone just sounds like another person writing. It doesn't mean Tyrone is never effective, but I do think it lowers the value of Tyrone.</p><p><strong>[00:59:28] Dan: </strong>I'm going to read to you a passage. There was a profile of Joyce Carol Oates in <em>The New Yorker</em> about a month ago, and I think you linked to it one day. I found this really interesting. Here's a quote about Joyce: to waste time made her feel slithering, centerless, she wrote in her journal, a 500-pound jellyfish unable to get to this desk. Oates was friends with Susan Sontag, who had a busy social life. After the two spent time together in New York City, Oates told her, "In some respects, I'm appalled by the way you seem to be squandering your energy."</p><p>Then she reminded Sontag that the pages you perfect day after day will be the means by which you define your deeper and more permanent self. Now, I found it interesting, Sontag also wrote in her diary that Joyce Carol Oates writes all the time, she can meditate while writing. She says she has no feelings. There's a dispute here. Who do you think is right?</p><p><strong>[01:00:13] Tyler: </strong>Well, they're both right for what they were born with. Joyce Carol Oates, I don't know her, but I strongly suspect she could not have been Sontag. I also strongly suspect Sontag will endure much more than Joyce Carol Oates. I don't know that enduring is the standard. Robin Hanson and I have this chat sometimes. He thinks you want to do things that will last a long time. I don't really see the value in that.</p><p>I think you want to do things that somehow generate some energy now, and you're not going to know it's going to last. Probably none of it will last. Very few things last, and don't worry about lasting. It's hard enough to generate energy or response now. Now, Oates has written a large number of novels, but the best things of Sontag, I don't think we'll read them 100 years from now, but I think we'll read them 50 years from now. 
That's pretty impressive.</p><p><strong>[01:00:59] Dan: </strong>You think it's just her output that was more valuable than Joyce? Not sort of the way she chose to spend her time?</p><p><strong>[01:01:04] Tyler: </strong>Her peaks?</p><p><strong>[01:01:04] Dan: </strong>Yes, her peaks.</p><p><strong>[01:01:05] Tyler: </strong>She may have had lower bottoms. The Chairman Mao stuff, I'm really not crazy about. Oates, by being so obsessed a worker bee, I think avoids the flaws of Maoism and all those overreactions. There are some benefits to the Oates method that are sometimes under-discussed. You can't sit around for a long time and talk yourself into something really stupid because you need to get the next novel out.</p><p><strong>[01:01:31] Dan: </strong>What's the right amount someone should invest in their social life? Because really, the argument here seems also over how much time someone wants to spend being a New York socialite versus really committing to work. What's your view on that, or does that just vary by person?</p><p><strong>[01:01:45] Tyler: </strong>With Sontag, when there was obviously no internet, and I've read the Benjamin Moser biography of her, which is very good, I strongly suspect you had to do a lot of that in New York or somewhere like that to be connected to ideas at all. I would be surprised if she had overinvested in social life. It's more of a cutting question now. There's just a lot more dimensions along which you can try to optimize. Like, "Oh, I'm going to stay at home, but I'll be in these great WhatsApp groups." I don't know. I guess it just depends. I've tried to have my work and social life overlap as much as possible. I like that, but it's not for everyone.</p><p><strong>[01:02:21] Dan: </strong>That's a good point. I guess socializing could have been a totally different thing back before the internet.</p><p><strong>[01:02:26] Tyler: </strong>It was the way you got in touch with ideas. 
Like you would hear things on the radio and, again, then especially, New York was the place to be in a way it isn't now. Sontag was well-connected, well-located, had everything going for her. Of course, she built it that way. Paris was another place you could be. There were a few others, but my goodness, like most of this country, you just didn't have a chance of being an important thing. It was very hard.</p><p><strong>[01:02:55] Dan: </strong>To what degree do you think it's still important to be in somewhere central like that? Obviously less, but are there still benefits to being in a San Francisco, New York?</p><p><strong>[01:03:03] Tyler: </strong>Depends on what you do. Certainly, if you're doing startups, there's a labor market you need to worry about. There's a reason why so many of those are concentrated. Inspiration, levels of ambition, I think it's very important to be surrounded by other ambitious people, that makes most of Europe much worse. It's still important. If you're an academic in Ann Arbor, which has always had a very good school, but you are much better off in relative terms than you would've been before the internet. Absolutely.</p><p><strong>[01:03:32] Dan: </strong>Do you think that Europe's problems in general, just with lagging innovation lately, are more cultural or more political? In other words, do you think that electing new officials into the EU could get this thing turned around, or do you think it's the culture in Europe is really just turned away from ambition?</p><p><strong>[01:03:51] Tyler: </strong>I think it's the culture, but I'm more positive on that culture than many people are. All the criticisms, growth is too slow, too bureaucratic, overregulated, that's all true. There is some cultural capital still embedded deep in Europe that they can have public conversations about things and arrive at non-crazy answers. That's very deeply embedded there. That's part of the culture too. 
It may even be related to some of the reasons why they overregulate, and Europe's pretty robust. I wrote a column recently about France. France has become an underrated nation, oddly enough. I'm not a doomster on Europe, but obviously, they face serious challenges.</p><p><strong>[01:04:30] Dan: </strong>If you didn't need to sleep and had a marginal seven hours per day over everybody else, where would you spend the time?</p><p><strong>[01:04:36] Tyler: </strong>Oh, seven hours. I can't really get anywhere in that seven hours. I don't think I would do anything very differently with it than what I do now. I would just have more of it.</p><p><strong>[01:04:45] Dan: </strong>Do you ever waste time? What would that even mean to you?</p><p><strong>[01:04:48] Tyler: </strong>Isn't it all wasted time? I don't know, what's the waste?</p><p><strong>[01:04:52] Dan: </strong>Fair enough. Fair enough. Do you think that Bach would've been even more productive with the internet, or would it be a distraction?</p><p><strong>[01:04:59] Tyler: </strong>I don't see how he could have been more productive. It seems the thing he could have used was better-functioning pens, and that would've made him more productive. Even if you think, well, what if Bach could've had an AI and he talks into the AI and it writes out all the musical parts for him? We don't know the Bach production function. He did so much. I somewhat suspect it was the writing it out by hand that generated the musical ideas. It was actually pretty well set up to optimize Bach.</p><p><strong>[01:05:29] Dan: </strong>Why are there no real competitors to Marginal Revolution? No one out there just said, "Hey, I'm going to go blog five times a day and put assorted links out there." Correct me if I'm wrong, but I haven't seen a single competitor truly emerge.</p><p><strong>[01:05:41] Tyler: </strong>I don't know of one. It's been over 20 years. Well, it's hard to do. That's a stupid kind of answer. 
The downward-sloping demand curve point reemerges. You need to put a lot, really, in a way, your whole life into it. It's not that writing takes that long, but you have to live the kind of life where you're always discovering new content, and most people just don't want to do that, or they're not good at it. They can't do it. Some mix of the above.</p><p>I guess that's the binding constraint. More generally, I'm struck by how many of the people who are still around are doing well. They're not bloggers anymore, but Matt Yglesias, Ezra Klein, Megan McArdle, they all came from that early burst. There was something in the air then. I don't know what it was. There've been some follow-ups. Scott Alexander comes more recently, but that's now even like 10 years ago, I think. These things are weird. You see similar patterns in music, in the arts, bursts, and then dry spells. We don't understand them.</p><p><strong>[01:06:42] Dan: </strong>Why have you kept blogging?</p><p><strong>[01:06:44] Tyler: </strong>I still learn from it all the time. It gives me access to a lot of smart and interesting people. It helps me get other things I do, like the podcast. I have no plan to quit anytime soon. Not at all. I'll probably do it for as long as I can.</p><p><strong>[01:06:58] Dan: </strong>Do you think that you just value those things more than the other people who quit, or why did you have the staying power and everyone else did not?</p><p><strong>[01:07:06] Tyler: </strong>I don't find it strenuous or stressful to have a deadline every day. That's part of it. That may get to my equanimity, or detachment, that you mentioned earlier. I can read and absorb material much faster than other people. Even among people of comparable IQ or education, that's an important underlying factor. If I can read, say, 5 to 10 times faster than a lot of comparable people, that's just a huge edge.</p><p>I know a lot of people don't believe I do it. Fine, think what you want.
I don't want you to believe me. I don't want competitors. There's always something. Then the notion that I could go almost anywhere in the world except North Korea and meet up with interesting people who would engage with me. That's a super high value, and I don't want to give that up.</p><p><strong>[01:07:52] Dan: </strong>Okay. Some questions loosely about New Jersey. I think you mentioned before that you're a fan of <em>The Sopranos</em> and I believe, did you grow up around James Gandolfini? I don't know if you had a relationship with him.</p><p><strong>[01:08:02] Tyler: </strong>He and I worked at the same Valley Fair in Hillsdale, New Jersey at the same point in time. I was in the produce department and he was pushing carts. Now, I didn't know him. My sister told me this, I almost certainly saw him around. I didn't think, "Oh, hey, that's James Gandolfini of <em>The Sopranos</em>," because we were 15, 16 years old. He was there and I was there, at the same supermarket, you could call it. It was like a totally inferior, immature Costco, to be fair.</p><p><strong>[01:08:35] Dan: </strong>Okay.</p><p><strong>[01:08:36] Tyler: </strong><em>The Sopranos</em>, what they show in the episodes, I recognize an awful lot of these places. Satriale's is where they meet, especially early on, the butcher shop on Kearny Avenue. I was born in Kearny in the Kearny Hospital, which is only a few blocks from that butcher shop. That whole street view, I know well. The views driving at the 95 overpass and all that. It's like, "Yes, yes. I know that." The sporting goods store on Route 17, very familiar to me.</p><p><strong>[01:09:06] Dan: </strong>Oh, fascinating. Here's the question. Why is Tony such a horrible talent spotter? He gave Christopher so many chances. His best friends are constantly betraying him.
What is Tony Soprano's fundamental flaw with spotting talent?</p><p><strong>[01:09:19] Tyler: </strong>Well, there's adverse selection into all of those roles and indeed into Tony's role. You're selecting for who is brutal, who will be at least superficially loyal to you, which is not a good way to select. You end up with fools, cretins, unreliable people who have been trained in enough brutality. They can also do you a lot of harm. Even if they don't kill you, they may go out and kill someone else in a way where you get whacked in return and you're liable for them. That's what happens. Selection so often matters more than incentives.</p><p><strong>[01:09:51] Dan: </strong>Interesting. One of your learnings that you mentioned from the produce department, and being from rural Michigan, I've sensed this to some degree as well, is that there are many people who are highly intelligent and capable but don't quite have the conceptual frameworks to put them on a road to success. My question to you is, what's the real problem here? In what ways would society have to change for the most capable coworkers that you had in the produce department to have a shot at getting a CEO-type job when they grow up?</p><p><strong>[01:10:19] Tyler: </strong>Some of the people I worked with I thought were quite smart, certainly not lazy, and charismatic, but there was some conceptual issue. They would have had to realize on their own that they needed to go out and acquire all of this conceptual equipment, and they didn't have that. That's really quite important. It's related also to your <em>Sopranos</em> question. Maybe we should value that more, and also, in some way, teach it more. It sounds like a weird thing to teach. Maybe we teach it in a lot of indirect ways by showing people role models. Maybe there would be effective ways to teach it more explicitly.</p><p>I would look into that. In a sense, that's what I try to do, teach people.
There's this life you can live where you're an infovore. It's not for most people, but it might be for you, or maybe at the margin, you want a little bit more of it, and here's what it looks like. It's not that bad. I'm trying to do that in a way.</p><p><strong>[01:11:14] Dan: </strong>Say, a talented 14-year-old who has broad intellectual ambitions comes to you for advice, under what circumstances are you going to recommend that they spend some time doing some sort of manual labor? Is it always, or are there special circumstances where you say, "Yes, you should go work construction for two years, or the produce market"?</p><p><strong>[01:11:30] Tyler: </strong>It depends what country they're from, but if they're from the United States and a middle-class or upper-class family, I would recommend that. If you're from a country where you're just trying to get away from that, probably different. Why not, most of all? It used to be standard that people would work as teenagers. Teen employment rates have plummeted. There's way too much homework, and we've replaced actual work with extracurriculars, which I think is a terrible trade-off.</p><p><strong>[01:11:58] Dan: </strong>Getting close to wrapping up here. I have just a few quick rapid-fire questions for you. I'm going to name a few people and I want you to just say the number one thing that you've learned from them. All right. We'll start with Thomas Schelling.</p><p><strong>[01:12:09] Tyler: </strong>Game theory came alive for me when I read Thomas Schelling, which is before I met him. There's a separate question, what did I learn from knowing Thomas Schelling? He had the ability to always have an appropriate anecdote or insightful story ready no matter what the topic was. Even at quite an old age, he could do that better than anyone else could in the room. I guess I think I've never learned that from him like I should have. I failed to learn that. I'm not really good at that the way he was.
I would say the story there is what I didn't learn.</p><p><strong>[01:12:44] Dan: </strong>What about Jonathan Swift?</p><p><strong>[01:12:46] Tyler: </strong>I think <em>Gulliver's Travels</em> is one of the all-time great books. You can keep on rereading it. It always has additional richness on human motivation. He has a very deep understanding of how people respond to situations, and what strangeness is like, and how easily we can turn bigoted or intolerant. Those would be some things I learned from Swift.</p><p><strong>[01:13:08] Dan: </strong>The Gioia brothers.</p><p><strong>[01:13:09] Tyler: </strong>You mean in painting?</p><p><strong>[01:13:11] Dan: </strong>I mean Ted Gioia from The Honest Broker, and Dana Gioia from--</p><p><strong>[01:13:16] Tyler: </strong>Oh, Gioia. Sure. I only met Ted once. I've learned a lot from his writings on music and from the podcast with him, but I haven't had really much direct contact at all. Dana, I've spent a lot of time with. American music of the sort like Samuel Barber, I understand much better because of Dana. A lot of opera, I understand much better, a good deal of poetry and theater. I would say the American arts in the first half of the 20th century. A lot about the American West. Quite a bit about how the NEA, National Endowment for the Arts, works, which he was head of for a number of years, and how to negotiate bureaucracies. Some amount about marketing. I've learned a lot of different things from Dana.</p><p><strong>[01:14:04] Dan: </strong>Camille Paglia.</p><p><strong>[01:14:06] Tyler: </strong>I only met her once. Her book, <em>Sexual Personae,</em> really quite inspired me, and it made me realize I should write for broader audiences, and that one could take what is underlying scholarly material and make it something people might want to read. Now, in the particulars I learned from her: my whole understanding of Edmund Spenser comes from her, and I think that's basically correct.
She changed my mind about quite a few things in the Renaissance. Androgyny and Shakespeare, I got quite a bit of that from her. The Pre-Raphaelites, I understand better because of her. There are a number of other details I could relate, but I've learned a lot from her.</p><p><strong>[01:14:44] Dan: </strong>Shruti.</p><p><strong>[01:14:46] Tyler: </strong>Oh, I've learned an enormous amount about India and talent search in India from Shruti. The Indian Constitution, most of all, how the Indian policy world works, how to think about different regions of India, what's the best Indian food. Really quite a bit about veena playing and Carnatic music. A little bit about Bollywood, which she knows incredibly well. It just doesn't interest me that much. One of my failings is I haven't learned enough from Shruti about Bollywood. Maybe that will be remedied, but the movies for me are too long. I do think they're good. It's just very time-intensive. That would be my Shruti answer.</p><p><strong>[01:15:20] Dan: </strong>Tyler, it's been wonderful talking to you today. Thank you so much for your time.</p><p><strong>[01:15:25] Tyler: </strong>Same here.
A real pleasure.</p>]]></content:encoded></item><item><title><![CDATA[Samo Burja]]></title><description><![CDATA[The inaccuracies of history, gerontocracy, and nuclear energy]]></description><link>https://www.danschulz.co/p/samo-burja</link><guid isPermaLink="false">https://www.danschulz.co/p/samo-burja</guid><dc:creator><![CDATA[Dan]]></dc:creator><pubDate>Thu, 04 Jan 2024 00:56:28 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/140339171/96ae3db5e1c6ab945d372a408e359940.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<div id="youtube2-KGjUOfXig9k" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;KGjUOfXig9k&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/KGjUOfXig9k?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><iframe class="spotify-wrap podcast" data-attrs="{&quot;image&quot;:&quot;https://i.scdn.co/image/ab6765630000ba8a650f7ddd3a2dc2cc2774bb10&quot;,&quot;title&quot;:&quot;Samo Burja&quot;,&quot;subtitle&quot;:&quot;Dan Schulz&quot;,&quot;description&quot;:&quot;Episode&quot;,&quot;url&quot;:&quot;https://open.spotify.com/episode/0HR4zXvKf3ftDvHs1rUPJA&quot;,&quot;belowTheFold&quot;:false,&quot;noScroll&quot;:false}" src="https://open.spotify.com/embed/episode/0HR4zXvKf3ftDvHs1rUPJA" frameborder="0" gesture="media" allowfullscreen="true" allow="encrypted-media" data-component-name="Spotify2ToDOM"></iframe><h3><strong>Timestamps</strong></h3><p>(0:00:00) Intro</p><p>(0:00:21) Knowledge decaying over time</p><p>(0:06:02) History and the overabundance of information</p><p>(0:11:10) History as a profession</p><p>(0:13:32) What would Hitler and Stalin think of our textbooks?</p><p>(0:16:44) What would we gain from a more 
accurate picture of history?</p><p>(0:22:31) Live Players and Great Founders</p><p>(0:25:03) Live Players and philosophy</p><p>(0:30:15) Is China more fragile than the US?</p><p>(0:42:34) Gerontocracy and succession</p><p>(0:51:53) Will Capitalism last 100 years?</p><p>(1:04:02) Transition to clean energy</p><h3><strong>Links</strong></h3><p>- <a href="https://twitter.com/SamoBurja">Samo&#8217;s Twitter</a></p><p>- <a href="https://samoburja.com/">Samo&#8217;s homepage</a></p><p>- <a href="https://www.bismarckanalysis.com/#/">Bismarck Analysis</a></p><p>- <a href="https://samoburja.com/gft/">Great Founder Theory manuscript</a></p><p>- Samo&#8217;s new podcast, <a href="https://www.youtube.com/playlist?list=PLEG8Q6Fb-Gi1Vz-zb0Bvbj5zaeg7voiOP">Live Players</a></p><p>- Subscribe to Undertone on&nbsp;<a href="https://www.youtube.com/channel/UCtRjQ8EA3Via7KXzv9F173Q">YouTube</a>,&nbsp;<a href="https://podcasts.apple.com/us/podcast/the-world-of-yesterday/id1693303954">Apple</a>,&nbsp;or <a href="https://open.spotify.com/show/59YkrYwjAgiKAVMNGWPaLE">Spotify</a></p><p>- Follow&nbsp;<a href="https://twitter.com/dnschlz">Dan on X</a></p><p>- Share anonymous feedback on the podcast here:&nbsp;<a href="https://forms.gle/w7LXMCXhdJ5Q2eB9A">https://forms.gle/w7LXMCXhdJ5Q2eB9A</a></p>]]></content:encoded></item><item><title><![CDATA[Vitalik Buterin]]></title><description><![CDATA[Libertarianism, vision for Ethereum, and AI]]></description><link>https://www.danschulz.co/p/vitalik-buterin</link><guid isPermaLink="false">https://www.danschulz.co/p/vitalik-buterin</guid><dc:creator><![CDATA[Dan]]></dc:creator><pubDate>Tue, 26 Dec 2023 19:46:03 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/140098794/07fb156d7458c6ef2c9df355c59cc3c1.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<div id="youtube2-skxA10SELMk" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;skxA10SELMk&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" 
data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/skxA10SELMk?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><div class="apple-podcast-container" data-component-name="ApplePodcastToDom"><iframe class="apple-podcast " data-attrs="{&quot;url&quot;:&quot;https://embed.podcasts.apple.com/us/podcast/undertone/id1693303954?i=1000639748548&quot;,&quot;isEpisode&quot;:true,&quot;imageUrl&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/podcast-episode_1000639748548.jpg&quot;,&quot;title&quot;:&quot;Vitalik Buterin&quot;,&quot;podcastTitle&quot;:&quot;Undertone&quot;,&quot;podcastByline&quot;:&quot;&quot;,&quot;duration&quot;:4746000,&quot;numEpisodes&quot;:&quot;&quot;,&quot;targetUrl&quot;:&quot;https://podcasts.apple.com/us/podcast/vitalik-buterin/id1693303954?i=1000639748548&amp;uo=4&quot;,&quot;releaseDate&quot;:&quot;2023-12-26T19:01:50Z&quot;}" src="https://embed.podcasts.apple.com/us/podcast/undertone/id1693303954?i=1000639748548" frameborder="0" allow="autoplay *; encrypted-media *;" allowfullscreen="true"></iframe></div><iframe class="spotify-wrap podcast" data-attrs="{&quot;image&quot;:&quot;https://i.scdn.co/image/ab6765630000ba8a6edd46d4bab12ac4d9019823&quot;,&quot;title&quot;:&quot;Vitalik Buterin&quot;,&quot;subtitle&quot;:&quot;Dan Schulz&quot;,&quot;description&quot;:&quot;Episode&quot;,&quot;url&quot;:&quot;https://open.spotify.com/episode/49CE40i46lADAL37I1Gs25&quot;,&quot;belowTheFold&quot;:false,&quot;noScroll&quot;:false}" src="https://open.spotify.com/embed/episode/49CE40i46lADAL37I1Gs25" frameborder="0" gesture="media" allowfullscreen="true" allow="encrypted-media" data-component-name="Spotify2ToDOM"></iframe><h3>Timestamps</h3><p>(<a 
href="https://www.youtube.com/watch?v=skxA10SELMk&amp;t=0s">0:00:00</a>) Intro</p><p>(<a href="https://www.youtube.com/watch?v=skxA10SELMk&amp;t=21s">0:00:21</a>) Governance mechanisms</p><p>(<a href="https://www.youtube.com/watch?v=skxA10SELMk&amp;t=152s">0:02:32</a>) Talent in crypto</p><p>(<a href="https://www.youtube.com/watch?v=skxA10SELMk&amp;t=602s">0:10:02</a>) Talent clusters</p><p>(<a href="https://www.youtube.com/watch?v=skxA10SELMk&amp;t=923s">0:15:23</a>) Crypto conferences</p><p>(<a href="https://www.youtube.com/watch?v=skxA10SELMk&amp;t=1147s">0:19:07</a>) Vitalik&#8217;s vision for ETH</p><p>(<a href="https://www.youtube.com/watch?v=skxA10SELMk&amp;t=1731s">0:28:51</a>) Libertarian canon</p><p>(<a href="https://www.youtube.com/watch?v=skxA10SELMk&amp;t=2845s">0:47:25</a>) ETH cultural or technological innovation?</p><p>(<a href="https://www.youtube.com/watch?v=skxA10SELMk&amp;t=3199s">0:53:19</a>) Learning languages in the age of AI</p><p>(<a href="https://www.youtube.com/watch?v=skxA10SELMk&amp;t=3425s">0:57:05</a>) AI and d/acc</p><p>(<a href="https://www.youtube.com/watch?v=skxA10SELMk&amp;t=4003s">1:06:43</a>) P(doom)</p><p>(<a href="https://www.youtube.com/watch?v=skxA10SELMk&amp;t=4337s">1:12:17</a>) Humanity&#8217;s descendants</p><p>(<a href="https://www.youtube.com/watch?v=skxA10SELMk&amp;t=4633s">1:17:13</a>) If ETH succeeds, why?</p><h3>Links</h3><p>- <a href="https://vitalik.eth.limo">Vitalik&#8217;s blog</a></p><p> - <a href="https://twitter.com/vitalikbuterin">Follow Vitalik on X</a></p><p>- <a href="https://vitalik.eth.limo/general/2023/11/27/techno_optimism.html">Vitalik&#8217;s techno optimism</a></p><h3>Transcript</h3><p><strong>[00:00:13] Dan: </strong>All right. Today I have the pleasure of speaking with Vitalik Buterin. Vitalik, welcome.</p><p><strong>[00:00:18] Vitalik Buterin: </strong>Thank you so much, Dan. It's good to be here.</p><p><strong>[00:00:22] Dan: </strong>First question. 
If you were put in charge of improving a large governing body, so say like the EU, how do you think about the relative importance of fixing the governance mechanisms and laws versus the talent of the people making the decisions? Said another way, if you could only choose one to start, would you change the incentives to attract highly talented people to enter the government, and then leave them at the mercy of today's bureaucratic rules, or would you re-architect the way that the government actually works and leave the existing people in charge?</p><p><strong>[00:00:49] Vitalik: </strong>Wow, that's a tough question. It's definitely in that category of questions I haven't really thought about directly. I feel like I lean more on the people side. I think 10 years ago, I probably would have leaned more on the institutions and incentives side. I think those things are important, but I think the other side of that is, basically, just that there have been so many attempts to try to create much better incentives. So much of the time it just ends up either falling completely flat or just being okay, less impressive than people think.</p><p>I definitely feel more pessimistic about this "improve everything by improving the incentives" direction than I did, say, five years ago, even though I do still think that incentives are quite important. I'm trying to think if I have a more detailed answer, because the EU is one of those in particular that I have less experience in understanding the deep internals of than other countries, just because I've spent relatively less of my time there. It's just the sort of thing where you don't absorb as much information about it just by hanging out on the internet by default. It's just that it's harder for me to give amazing topic-specific answers about.</p><p><strong>[00:02:32] Dan: </strong>Got it. Got it. Okay. I have a couple of questions on talent.
One of the things that's always made me really bullish on crypto is the perceived level of talent in the field. It's got this hugely ambitious vision, it attracts a lot of really, really super ambitious people. Now that AI has a lot of mind share among the same set of people that might be thinking about what to do with their careers, do you think the aggregate talent level entering crypto is higher or lower than it was three years ago?</p><p><strong>[00:02:59] Vitalik: </strong>That's a good question. I think that it's hard to give an estimate because there's also just a big confounder, which is that the crypto space just keeps getting more and more mainstream with every passing year. There's this unavoidable regression to the mean effect. If you go all the way back 10 years ago, for example, the kinds of people that crypto attracted were the kinds of people that were really in it for the ideals.</p><p>Often they were people that were quite skilled programmers, the sorts of people that would totally be able to earn a million dollars a year in some regular job. The problem is that they could totally not be motivated to actually stick to that regular job. They need something that they're passionate about. These are the people who are passionate about open-source software, freedom, empowerment. All kinds of technologies that are at the core of crypto.</p><p>It reminds me of how there&#8217;s a lot of these developers like, <a href="https://en.bitcoin.it/wiki/Amir_Taaki">Amir Taaki</a> was one, he created libbitcoin, which was one of the earliest alternative Bitcoin implementations. He ended up teaching me quite a bit at the time. He was the sort of person who at the time called himself an anarchist; now he&#8217;s moved to democratic confederalism.
And he spent a lot of time in, I don't know, the Kurdish areas of the Middle East and was really impressed by studying the philosophy and the values there, and those are the sorts of things that really drive him.</p><p>That's the sort of thing that I think crypto still has, but it's been relatively diluted because the salary potential also exists. There's plenty of people who would rather earn 200k working on something that's actually meaningful in the space than earning two million just working on some other random casino that's just the same as all the other casinos. Yes, like I said, it still exists, but that dilution effect is still there.</p><p>I don't feel like I have seen too much impact from AI yet. If I had to guess what impacts it would have, I would guess there would be a sorting effect. The sorting effect that I have in mind is like the sorting between people who care about things like, I don't know, freedom and openness and privacy and open global networks, specifically, versus people who just care about generic cool stuff.</p><p>I think AI attracts a lot of the second, and AI relatively speaking attracts less of the first. The thing with crypto about five years ago is it attracts them both. When you have a space that attracts both, it can be hard to separate one from the other. When you're working with a person, it can be really important still to actually figure out which one of those it is, because even if it doesn't matter today, it matters in terms of what's going to motivate them five years from now, or it matters what's going to motivate them if, let's say, the political climate really changes a lot or if the incentives shift in some way. That would be the thing that I predict.</p><p>Basically, this kind of divergence between the values that are specific to crypto and the kind of people who find cool things attractive in a more generic sense.
I feel like there are very high quality people in both camps.</p><p><strong>[00:07:01] Dan: </strong>Got it. Maybe talking specifically about what makes someone good in crypto. I heard you talk before about Danny Ryan who emerged as what you called a decentralized PM when Ethereum was moving to proof of stake. I think what you mentioned is he doesn't have any formal authority to actually order anyone around. Mostly what he is doing is talking to different client teams that are building the software within the ecosystem. What traits do you think make someone a good fit for this sort of role over a more traditional PM at a tech startup?</p><p><strong>[00:07:32] Vitalik: </strong>It's a good question. I definitely feel like I understand the Danny Ryan role much better than I understand the traditional startup PM role, just because I ended up basically jumping from university straight into the crazy crypto world. I think if I had to give the best answer, I would say, first off, there's definitely a type of diplomacy skill that's required that's much more specific than just being able to tell people like, "This is what you're doing, and this is the thing that needs to be done." There's much more of a need to match people to the task and match teams to the tasks that they actually want to do and would be excited about doing.</p><p>That's a thing that's important to keep in mind in any functional organization, but the dial of how important that is goes way up to 11 in the crypto space, I feel. Just because when you don't have perfect control over what people do, you do have to rely on people's intrinsic motivations quite a bit more. Another thing is like, I think there's this much more public-facing function that's important in the crypto space in general.</p><p>The way that I've thought about it is, sometimes, there's this idea that a company has a technical co-founder and a non-technical co-founder.
The way that I would extend it in a way that I think realistically applies to anyone these days, but it applies even more so to crypto projects, is like you have a technical co-founder, you have an organizational co-founder, and you have a meme lord.</p><p><strong>[00:09:36] Dan: </strong>[laughs]</p><p><strong>[00:09:36] Vitalik: </strong>Sometimes you have people who are two of those, or sometimes even three of those. The meme lord thing is a role that you have to do and be part of and embrace. I feel like Danny has definitely also done quite a good job of stepping up into that too.</p><p><strong>[00:10:03] Dan: </strong>There's a lot of talk this year that if you're serious about AI, you need to be in SF. Sort of regardless of whether or not that's true, it does seem like physically localized talent clusters have been a big part of the history of computing and startups. Ethereum and everyone who works on it are obviously much more geographically dispersed. I'm curious if you think this is a feature or a bug for Ethereum. Then depending on your answer, what other industries that had success being localized would have been even more successful if they were dispersed?</p><p><strong>[00:10:31] Vitalik: </strong>Yes, I think one factor that's very specific to crypto is just the fact that these are global networks that are not-- like a big part of their entire point is that they're not overly controlled by one country or even one cluster of companies. They are this unique thing that people from all different parts of the world are able to trust, even if they don't necessarily trust each other. That's not a thing that the centralized US tech sector is really able to benefit from.</p><p>I think there's just that one unique aspect of crypto, the extent to which it's really still one of the very few things that's international in this way at a time when increasingly almost nothing else is.
I think that creates this extra motivation for the crypto space to try to be more dispersed and explicitly appeal to this much more dispersed community and attract them not just as users, but as developers and just any kind of contributors.</p><p>I think putting that aside, in terms of just raw productivity, I think one of the things that's interesting is that it feels like the crypto space has recognized the importance and the value of in-person collaboration, but it's developed these totally bespoke, crazy institutions to try to achieve that despite being what it is. One of these is the conference circuit. There's this thing where you have conferences happening somewhere pretty much every week and there's people that go to some of these at various frequencies. Sometimes this gets criticized as just wasting time and wasting resources. I think one of the really powerful benefits that it provides is the ability to have all of these different people actually get the benefit of this in-person time together despite living in and coming from totally different parts of the world.</p><p>Then another example of this is within the Ethereum Foundation, we pretty regularly have these retreats. Basically, something like between 20 and 150 people, depending on size, come together in a particular location for one week or two weeks. Often, but not always, it's scheduled to coincide with a conference, but regardless of whether it is or it isn't, people get a chance to talk with each other face-to-face, have these high-bandwidth, very big-picture discussions about things that are really important to coordinate on, get to be on the same page about, and then go back to the remote world and actually execute on those things.</p><p>I think one interesting consequence of this is that one thing that is special about the crypto space to me is that there is this culture of collaboration between different companies.
Sometimes also between companies and academic groups, sometimes it's between one company and other groups, sometimes it's between companies and the Ethereum Foundation. It feels like it really exists to a much greater extent than I see even in a lot of other places.</p><p>I think one of the reasons why is probably because the structure of the in-person interaction is that people get lots of face time, not just with people who are working in the same place, but also with people working all across the ecosystem. The average person from Scroll has probably met a bunch of people from Polygon and an average person from Optimism has met a bunch of people from Arbitrum and so forth.</p><p>I don't have a super rigorous level of evidence for this, but I have a gut feeling, and it would be interesting at some point to explore more how this aspect of not just companies having company retreats, but also having these more ecosystem-wide things, really contributes to this spirit of cooperativeness that exists.</p><p><strong>[00:15:24] Dan: </strong>Yes. On the conferences, this is like a big thing that's maybe a little bit even uniquely big in crypto. How do you think about choosing a conference to go to and what makes a really good one?</p><p><strong>[00:15:34] Vitalik: </strong>The way that I choose is basically that I make a choice of which continent or which area I'll roughly be in at a particular time. Because it's like, I just can't be literally flying halfway across the world every week. Then once I've chosen that, I look at whether there are things that look interesting enough in terms of what kinds of people I can meet, both local community people and global Ethereum people, and what kinds of important things are going to be talked about there and so on. If an event looks important enough, then I go, otherwise I don't go.
Then if something is in the wrong continent, then I just default pretty strongly towards not going to it.</p><p>I think in terms of what decides the quality of an event, one division there is like, is this a research and dev event or is this one of these big conferences? I think for research and dev events, it's definitely just having the right people there. If you just put a hundred really bright people who are already working on the important problems together, even if you just put them in some random hotel for a week, lots of amazing stuff is going to come out.</p><p>For some of these bigger events, in the crypto space, the big divide is basically, is this an interesting conference or is it a shill conference. I feel like every space has its shilly side to some extent, but that's definitely a dynamic that gets amplified quite a bit in crypto. There's the question of like, is the primary vibe here, people trying to do interesting and meaningful things or is the primary vibe here like, "Hey, I'm going to tell you about this token whose price is going to go up by a factor of eight within three weeks"?</p><p>The first looks very different from the second. They have very different crowds. They have different focuses. You can just tell pretty easily whether a particular event is more in one camp or more in the other camp. Then, of course, sometimes you have the thing that happens where a thing in the second camp tries to latch onto a thing in the first camp for legitimacy and you have to figure out to what extent that's going on and so forth.</p><p>I think from the point of view of someone who is not very deep in research and dev circles, to me, the main value of a conference is not the presentations because you can always watch presentations on YouTube. To me, the main value of an event is like, can you have good side conversations and meet interesting people that you want to meet there? 
Sometimes having high-quality presentations is almost a way of creating an advertisement that says, "Hey, yes, we actually do have interesting people here and so actually interesting people should feel welcome here."</p><p><strong>[00:19:06] Dan: </strong>Yes. A question on your priorities. I know you've talked about how your top priorities for Ethereum really haven't actually changed that much over time. It's like scalability, privacy, wallet security. I'm curious, what about your vision? Could you articulate your current view of why Ethereum is important for the world and talk about whether that vision has changed over time?</p><p><strong>[00:19:26] Vitalik: </strong>Yes. I think Ethereum and the broader crypto space are important because they let us create applications and tools that are very important for our collective and collaborative functioning but in a way that doesn't require everyone to trust the same centralized intermediary. I think being able to do that is good because once you trust the centralized intermediary, then there's just lots of ways in which that relationship can slowly turn exploitative over the course of a decade or two decades.</p><p>There's lots of examples to point to for this. Even things like social media starting out being very user and developer friendly. Then turning into these closed things where they try really hard to stop you from doing anything except through their interface, then charging more, and then essentially becoming more and more exploitative over time. There's just the fact that there's a limit to what things people even are willing to trust and how reliable particular centralized actors are.</p><p>Especially as we've seen with things like money and the financial system, when it depends on centralized actors, then that ends up-- sometimes it ends up just completely failing. There's definitely lots of countries around the world where the centralized institutions just can't be trusted at all. 
Sometimes you end up excluding large parts of the world and basically only serving very normal Western customers.</p><p>Sometimes it just ends up being very inefficient, often ends up being worse at privacy. One of the big trends that now we're seeing again is that, with the whole push to a cashless society that's happening in a lot of places, your ability to just do payments in a way that doesn't give your information to third parties, which is a thing that we've had for thousands of years, is potentially very quickly disappearing, right?</p><p><strong>[00:21:55] Dan: </strong>Yes.</p><p><strong>[00:21:55] Vitalik: </strong>It's that intersection of applications that let people do things with each other, but without all being under the thumb of one big thing.</p><p><strong>[00:22:07] Dan: </strong>I guess what I'm really curious about, though, is if you were to answer that question five years ago, are there any core things that have changed in the vision?</p><p><strong>[00:22:15] Vitalik: </strong>I think a lot of things have become more specific. For example, five years ago, I was talking about money. I was talking about the thing that would eventually be called DeFi. I was talking about prediction markets. I was talking about ENS. I was not talking about NFTs. NFTs caught me totally by surprise, but I was talking about a lot of those things. Then, now I'm talking about similar things, but the things that I have to say about each individual category are much more detailed. For example, take decentralized social media for example.</p><p>You can talk about the virtue of making social media more decentralized in the abstract and say, "Hey, open-source software is good," or you could talk about it in the context of, "Hey, we just realized that all of these platforms are controlled by one company or one guy, and that one guy can suddenly be a totally different guy from who you were expecting. Things could very quickly change." 
People have a lot of specific examples of what they want to avoid that they can point to. Then the other thing is that there's specific projects, like Farcaster and Lens are probably the best ones. They actually deliver on these ideals that both I and a lot of people have been blabbing about in pretty concrete ways.</p><p>If you have an account on Farcaster, then your username is actually an ENS name. It's plugged into this decentralized name system and Farcaster the company can't just go and take your name away, or another example is the whole vision that you have this separation between the content and the view. You have the content, which is just whatever posts that people decide to make, and then you have interfaces that let people actually see the content.</p><p>There's been all of these ideas that people have talked about in theory about how if you separate those two, then you can have different people creating views for the same content, and people can experiment with different kinds of moderation and different kinds of algorithms and filtering. If you don't like one, then it's much more practical to switch to another. If you want to create your own, it's much more practical to do that because you don't have to also build up that network effect from scratch.</p><p>These are theoretical arguments that could have been made years ago, but then, you can actually see it in action. You can go to Farcaster and you can see the Farcaster content, and okay, it's a decentralized Twitter. Then you go on-- I forget what it was called. It's not Flik, but the thing that makes Farcaster look like a Reddit. Flink, I think. You go on it, and you see the exact same content. It just looks like a Reddit.</p><p>You have these different ways of seeing the same thing, and you have actual UI competition within the same ecosystem. It's something really cool and powerful. You can actually see the benefits of that in a very real way. 
That's something that I would totally not have been able to point to about three years ago. There's a lot of that kind of specifics. Then, within the DeFi space, there's a much clearer separation in my mind between what kinds of DeFi projects are really interesting and what kinds of DeFi projects are just totally pointless. Within the DAO space, I think it's pretty similar.</p><p>One example of a thing that I really believe about DAOs more than I did before is, I think 20% of what's important about a DAO is the governance mechanism, 80% is the communication structure. If you have even the best governance mechanism, but then the ways that people actually communicate, start, and come together on decisions are just totally crappy, then the outcome is going to be totally crappy. Whereas, if you have a community that has the tools for that community to stay aligned on a communication level, then often, even governance structures that are theoretically super crappy can just keep on limping along much better than you would expect. Those would be some examples of my beliefs on specifics that have changed.</p><p>I think in terms of bigger picture stuff, I'm trying to think if I have very particular thoughts. Probably a thing that has also become less of an emphasis for me is this idea that you even can create a theoretically perfect governance mechanism. This is something that I feel like I was trying really hard to work on and create a mathematically provable, perfect governance mechanism.</p><p>Then at some point, I wrote those big, long blog posts that you might have read from 2019 or '20, where I just realized, "Hey, wait, especially with this whole collusion thing, there's just these fundamental impossibility results. 
You just have to look at things that attack totally different axes of the problem if you want to get better results."</p><p>I think it's less about searching for utopia and more about just creating basic infrastructure so that some of these aspects of how we interact with each other technologically can just continue to stay reasonably open, free, and in ways where international cooperation just continues to be more possible. Preserve those things as much as possible through an era where just all kinds of stuff is going to change every couple of years from now, pretty much all the way up until the singularity.</p><p><strong>[00:28:51] Dan: </strong>A question on some of your intellectual influences. I know that you've mentioned before when you were first getting into Bitcoin, you went through the libertarian canon, Mises, Ayn Rand, Hayek, everyone. Do you sympathize more or less with their ideas now than when you were first getting into Bitcoin and ultimately creating Ethereum?</p><p><strong>[00:29:08] Vitalik: </strong>The thing that I definitely sympathize less with is the idea that any of those things are complete philosophies. It's like how Ayn Rand has the whole A equals A thing, where you're supposed to start with A equals A. Then it could go all the way to, and this is why taxation is theft.</p><p><strong>[00:29:31] Dan: </strong>[laughs]</p><p><strong>[00:29:32] Vitalik: </strong>Look, I definitely have a default suspicion towards any argument of the form, "Hey, things should be organized totally differently. I have this one line of mathematical logic to prove it." 
There's a difference between believing in a principle as this statement of propositional logic that's supposed to always hold because it holds by definition versus believing a principle in the sense of like, "Oh, this is neuron number 3974 in my brain's internal neural network, and I noticed that neuron being activated more highly correlates to things that I like, and so I'm going to push for things that activate that neuron more, and push against things that deactivate that neuron more."</p><p>That more fuzzy approach to thinking about principles, one that treats our brains and the idea space that we work with less like a formal logic problem and more like LLMs, is something that definitely appeals to me more. Then you can also go down into the specifics, and there's a lot from those thinkers where the ideas make sense as personal motivation and as deep social insights, even if taking them too literally would just turn the world into total hell.</p><p>There's the famous quote that Ayn Rand's heroes are just completely crazy and impractical supermen, but her villains are totally realistic. Yes, and I think there are ways in which, I personally-- look, even when I was a teenager reading <em>Atlas Shrugged</em>, I found it deeply motivational. The underlying message that, if you feel terrible, then you should not try to find ways to blame that on society or the world being unfair or systemic and structural badness or whatever. You should just blame it on yourself and focus on how you can change.</p><p>That's a lesson that I took to heart then. I feel like <em>Atlas Shrugged</em> did actually contribute to my ability to take that lesson to heart. I feel like it was actually the right lesson for me at the time. Yes, and I think, there's good nuggets of personal philosophy in there in that way. I think, similarly, there's good nuggets of social philosophy in those things. 
The failure starts to really come when you start taking too seriously the idea that these people are providing complete systems that you're supposed to buy into in their entirety.</p><p><strong>[00:32:49] Dan: </strong>Yes. If there was a really bright 13, 14-year-old who wants to get into crypto, would you still have them familiarize themselves with the classical libertarians or would you have them start somewhere else?</p><p><strong>[00:33:02] Vitalik: </strong>That is a good question. I think it's interesting because I feel like over the last 10 years, this is one of the sad things, that I feel like we've been entering a much less principled age, in the sense that it feels like an indisputable fact that Rothbard and Mises and Hayek and Rand and all of these very deep and systematic intellectuals are just less influential. They have much less of a political camp that is deeply guided by them than they did 10 years ago.</p><p>Then you ask, what replaced them? The thing that seems to have replaced them is, like, Andrew Tate and the Bronze Age Mindset or something. I read these and I just go, like, "WTF, mate?" These are, again, things where I'm sure that there's people in particular psychological situations at particular times in their upbringing where it makes total sense for them to receive certain messages. As just bedrocks of an intellectual vision, they're, again, totally just random chaos that would--</p><p>Yes, there's this question of, is that a trend that you'd want to fight against and go back to older stuff or are there structural reasons why the older stuff stopped being relevant? It's like, you can't go back to the way that things were, and the only way out is through. Yes, the biggest structural thing that happened, obviously, is just-- I think there's two. One is just technology and social media. 
The other is this big, geopolitical fact that this post-1990s, sort of post-USSR collapse equilibrium, where both the US and a whole bunch of its ideas were extremely prominent, proved to be a very temporary thing, and it ended up collapsing.</p><p>Neither the old technology landscape nor US hyper-primacy is coming back anytime soon. Unless, of course, some totally curveball-y thing happens with AI that really does bring back US hyper-primacy or something. So far, that's not the default trajectory. You have to adjust to those realities. I think the thing that I would recommend now to someone is definitely to read things from different perspectives.</p><p>I still think that the early 2000s rationalist canon is super good. The 2010s Slate Star Codex canon and all of his posts, especially about the nature of tribalism and the-- especially the idea that, if you want to be tolerant, you have to tolerate things that you personally actually dislike, as opposed to tolerating just things that mainstream society is in favor of and you're in favor of anyway.</p><p>There's a lot of these fascinating, deep philosophical nuggets in there. Then on the specifically crypto anarchist and just broader crypto side, there is the original manifesto, the original white paper, a lot of those more specific canons. Phillip Rogaway's <em>The Moral Character of Cryptographic Work</em> is a good one. There's just this broad collection of things that I think are still really good and really valuable reads. I definitely support people reading that.</p><p><strong>[00:37:20] Dan: </strong>Yes. 
I'm curious what you think, though, because, I find it really interesting that you mentioned that people moved from core libertarian philosophers over to an Andrew Tate or a BAP or something, right?</p><p><strong>[00:37:35] Vitalik: </strong>Right.</p><p><strong>[00:37:36] Dan: </strong>I find it interesting that even Slate Star Codex and the rationalist community, Scott Alexander's got his really famous post, the Anti-Reactionary FAQ. People sometimes say that's one of the best introductions to this strain of thought. It does actually seem like it's a little bit seductive: people start as libertarians and they sort of move that way. To me, the core difference seems like it's much more concerned with culture than liberty. I'm curious just what you think is causing that shift and why specifically libertarians tend to get really interested in it and abandon some of the earlier gray tribe type stuff, as you might call it.</p><p><strong>[00:38:19] Vitalik: </strong>Yes, it's a good question. Let me see how I would analyze this. Sometimes I analyze these things by taking my own brain and dialing up to infinity the neurons I have that find those things appealing. On the one hand, this is in some sense the literal definition of empathy, and it's supposed to be good. Then at other times, people have told me that this ends up totally misreading people and being overly charitable. I'll try it anyway.</p><p>One aspect of it is, this is the thing that Balaji talks about all the time, that the main axis of politics definitely has shifted from being about economics to being about cultural issues. I think it's easy to find examples of this. If you remember back in 2010, the big issue was Obamacare. Obamacare is about healthcare economics. 
If you want to make an argument against it, then you're going to be saying things like, "Oh, people should be free to choose whether or not to have insurance because people have the best understanding of their situation and their motivations."</p><p>You want to put people in a position of power over things that are core to their own health. Those are economic arguments. There's definitely this very mathematically appealing aspect to those economic arguments. Those economic arguments do generally-- I mean end up going in a particular direction, which, if you're a particular personality type, it definitely is the libertarian direction.</p><p>Then if you start analyzing culture, one of the problems is that culture is just much harder to analyze in that way, right? It's a very anti-inductive thing in the sense that the existence of a theory about culture often itself is a thing that can kill the validity of that theory. It's harder to analyze. I think cultural issues are also inherently more zero-sum, because if you focus on the economics, then, okay, great, if we had better institutions, we could all have bigger houses and better health care and cleaner air at the same time, because you can push frontiers forward, and here's these math equations that show that if you align incentives correctly, that actually happens.</p><p>Then with culture, there's these deep vibe-level preferences that each of us have that are definitely, again, much less examined with the same intellectual tools. The kinds of conclusions that you would come to if you just start, I don't know, thinking about culture end up being very different. I do think there's the question of, is this switch to a focus on culture good or bad? 
I think I would say the answer is it has one very understandable part, but also it does have these very bad parts to it.</p><p>The understandable part to me is, if you think about it just from the perspective of your personal finances, if you personally had to choose between option A, which is, let's say, the life and the job that you currently have now, and you earn, let's say, $250K a year, and then option B is you work at some soulless multinational bank, and your job is to help one company win weird zero-sum competitions against other companies, and these are just totally random corps selling boats to rich people, but you earn $2 million a year. Which one of those would you personally rather have? Yes, I think for a lot of people the answer is the first one.</p><p>Then if you ask the question, would you rather have the fulfilling job at $2,500 a year versus the unfulfilling job at $20,000 a year, then it suddenly becomes very different. I think a lot of people would probably go for the $20K. Once your economic situation is good enough, then it makes sense to reallocate more of your caring toward culture. It's definitely true that libertarian literature just has less to say about culture in part because all of these economic modeling tools that it relies on just don't really fit well to that domain. That's the understandable part.</p><p>Then the place where I think this becomes pathological is, one is, there's the whole luxury beliefs argument, which is that if you're a coastal rich kid, then you're at the level where that fulfillment matters more. 
Then if you start even subconsciously nudging your entire country's politics in directions that optimize for that, then you're screwing over a whole bunch of much poorer people in your own country and possibly even poorer people in other countries, by polluting their intellectual sphere when what they really need is to continue caring about the economics.</p><p>Then there's just the whole short-term, long-term thing where good economics tends to work out even better in the long term than in the short term. With culture, the tools that we have to deal with it don't seem to really work in that long-term way yet. Let's see, how would I think about this? Thinking about the neo-reactionaries again, they're definitely an internally diverse group in some ways, because I'm just trying to think about the Moldbug posts that I've read, and very few of them seem to actually focus on culture. They seem to focus on the efficiency of monarchical rule and how the bad guys in historical conflict number 37 were actually the good guys and all of this stuff.</p><p>Then there's definitely lots of people who actually do focus on culture. Yes, so I think my analysis for why it's happening, probably, yes, if I had to name one factor that could be the biggest factor, it probably is the combination of that's what you care about more once you're wealthier, and society has adapted to that. We just know that even from people's personal preferences.</p><p>Then there's the Internet, social media aspect of things, which is basically that optics has been increasing in importance with pretty much every passing decade starting from basically the start of the Industrial Revolution. 
The extent to which you can actually get better results in terms of accomplishing your goals, or get worse results, by just having good optics versus bad optics in this public space was, I think, small 200 years ago, bigger 100 years ago, bigger 50 years ago. It's just crazy big now.</p><p>I think it's one of those realities that you have to analyze. I think it's one of those realities that rationalists and libertarians definitely find somewhat discomforting. It feels like sometimes there is a parallel between this discomfort about the inevitability of optics and vibes mattering among that crowd versus the discomfort about the inevitability of incentives mattering among communists. That's probably the maximally inflammatory way to put it, but I'm arguing against my own camp, so I'm allowed to.</p><p>I guess to summarize, there are these structural reasons why people have started paying attention to culture more. I think that's unavoidable. I feel like our intellectual culture hasn't really found ways to think about those issues that are healthy and lead to good outcomes yet. That's the thing that I really hope we can try to improve on.</p><p><strong>[00:47:26] Dan: </strong>Got it. To what degree do you think Ethereum and crypto more broadly is a cultural innovation versus a technological one?</p><p><strong>[00:47:35] Vitalik: </strong>I think huge, and I think the easiest way to see why it's huge is by comparing between different cryptocurrencies and different blockchains. If you ask people, why are you in Ethereum versus being in Solana versus being in Bitcoin, a lot of the time it is not the technical differences between those platforms that is the reason why. It's about the underlying values that those communities have. 
The fact that the Solana blockchain currently has this number of validators and it requires this number of terabytes to run is almost less important than, for example, the fact that there seem to be dApp developers that are fully comfortable with releasing dApps where their contracts are closed source.</p><p>That second thing really offends people culturally. It's like that cultural offense almost matters more than the technical properties of those platforms. Yes, so I think comparing between the blockchains, it's really easy to see the really large extent to which these are cultural platforms, even more than they are technologies. To me, that makes sense, even from the point of view of someone who would think that technology is ultimately what matters. Even if, let's say, you believe that because you think ultimately these things are going to go mainstream, and so the culture is going to regress to the mean and the technology is what stays, which is a very good argument.</p><p>Even still, I think up until we get to that point, the culture determines the derivative of the technology. It determines the direction the technology is going. If you have a culture that values decentralization, then you know that centralization problems are going to have people that work hard to try to fix them.</p><p>On the other hand, if the culture doesn't value that, then the problems are not going to be fixed. Another example of this is the intellectual culture of Bitcoin maximalism. You can easily feel how it's an intellectual culture that is around justifying a thing that already exists and that is already fixed. 
If a thing is this way, then the engine of the brain is targeted toward creating sequences of characters, or tokens, I guess that's the new way to call them, where once you hear those sequences of tokens, you feel good about the status quo, and the status quo feels correct.</p><p>Whereas, I think, Ethereum, relatively speaking, has less of that, and the intellectual culture is more about using ideas as a way of deciding, well, how are we actually going to change the ecosystem, either what goes in the next protocol hard fork or what goes into things on top like ERC-4337 or ZK standards or whatever.</p><p>If you think about statements like privacy on Ethereum sucks, that's a thing that lots of people in Ethereum freely say, and the reason why they're free to say it without that feeling like a personal attack on their identity is because their identity is not built around defending the status quo of Ethereum as it exists today. Their identity is built around the vision that Ethereum is driving towards, and the vision that Ethereum is driving towards already has five different teams that are working on some really, yes, "powerful privacy stuff." Yes, and I think that's the way in which culture has a really huge impact in all of these systems already.</p><p><strong>[00:51:35] Dan: </strong>My takeaway from these last few questions is that it's just culture all the way down. [chuckles] Presumably, CEOs of public companies have a reasonable grasp of what it will take to move their stock price. 
Yes, obviously, there's macroeconomic factors that are out of their control, but at the end of the day, if they increase revenues by X percent and keep margins at Y percent, they can make some good assumptions about where the stock price is going.</p><p>Now, I totally recognize Ethereum is not an equity, but I am curious for you, have you been able to say, if this specific set of events happens, I'm confident the price will do X, or do you even yourself find it hard to understand the forces affecting the price?</p><p><strong>[00:52:13] Vitalik: </strong>I definitely find it hard to understand the forces. Even just with this recent rise that we've seen, why did it break out this time and not the last few times that it briefly went over $2,000? I have no idea. Why did the ETH/BTC ratio drop from 0.07 to 0.055 at this time, but then it didn't drop at that other time? I have no idea. Why did it go up from 0.03 to 0.07 in the first place? I can come up with a story in my head about why that's a pretty natural reaction to an overcorrection, but ultimately, I have no idea. I think my brain definitely does treat them as these weird demonic forces, and there's an aspect to which you just have to accept it as it is and go along for the ride and just do what you can to make the ecosystem healthy without thinking about how that affects the price too directly.</p><p><strong>[00:53:20] Dan: </strong>Got it. Let's talk a little bit about AI. You have a really great post that came out very recently about what you call defensive accelerationism, or d/acc. I'm going to ask some questions about that post, and I'll link to it in the show notes and recommend everybody go read it. First one that came to me on language. You famously learned Chinese pretty quickly and know a bunch of languages. 
Do you think the ROI of learning a new language will still be worth it even if AI reduces all of the friction to speaking with someone in a different language?</p><p><strong>[00:53:51] Vitalik: </strong>I think the ROI of learning languages has definitely decreased from where it was 10 or 20 years ago by quite a bit. I think it always ends up taking longer than people expect to actually decrease in response to some new technology, but it is definitely lower. My advice at this point is, if you're in the US, then Chinese is good, Spanish is good, and, probably, yes, stick to those if you're the type that likes a challenge and, otherwise, focus on other things.</p><p>Five years ago, my advice would definitely have been more ambitious. I'd say, yes, my hope is, and this gets into some of the ideas that I wrote at the bottom of that post: at the same time as technology makes it easier for AIs to do things, technology could make it easier for humans to either do things or learn to do things, and so I've been hoping that we're going to magically get significantly better learning environments, whether it's for language or whatever else.</p><p>I think language is just an easy test case because it's so easy to define what the thing that you're learning is, and it's so easy to, relatively speaking, measure proficiency. 
It does feel like language learning gets a few percent easier every year, though the slope of that is definitely quite a bit lower than the slope of how quickly pure AI is getting better.</p><p><strong>[00:55:52] Dan: </strong>I guess one other way of phrasing this, actually, is, if literally the cost of talking to anyone in any other language goes to zero, then the effect of--</p><p><strong>[00:55:59] Vitalik: </strong>Then you should just focus your attention on other things.</p><p><strong>[00:56:03] Dan: </strong>Yes, it becomes learning Latin or something, right?</p><p><strong>[00:56:06] Vitalik: </strong>Right, exactly.</p><p><strong>[00:56:05] Dan: </strong>I guess my question was just, from the experience that you've had, is it still worth going to learn Latin or some other language just for the sake of learning? Have you gained any non-communication benefits from it?</p><p><strong>[00:56:16] Vitalik: </strong>Yes, it's a good question. Yes, I think there definitely is value from learning one or two languages other than the raw communication ability, which is just, it gives you a much better grounding to understand how language works in general. It gives you better grounding to be able to think about a topic without English-language cultural associations immediately seeping in. There definitely are these other forms of value that exist, but yes, it definitely becomes much smaller than in a world where you don't have magic instant translators everywhere.</p><p><strong>[00:57:06] Dan: </strong>Got it. You summarize your d/acc post at the end, and one of your calls to action is that you should build and build profitable things, but be much more selective and intentional in making sure you are building things that help you and humanity thrive. My question to you is, how do we practically incentivize this, and is that actually really different, or how is it different from people who are calling for AI regulation? 
It seems to me that's sort of what they claim they're trying to achieve. What's the nuance there that makes your call to action different?</p><p><strong>[00:57:39] Vitalik: </strong>Yes. What are the tools that we can use to actually achieve that differentiation? That, I think, is definitely one of the most important questions here. I think one of my answers is definitely that we just need much better forms of funding for some of these other alternatives. We need to identify much better ways to solve, for example, the old problem of how you incentivize better open-source software to get created.</p><p>That's been a problem that the open-source software space has been really struggling with for a long time, and it's something where you can definitely extend the question and also ask, well, if direction X is more risky and direction Y is less risky, and it looks like direction Y is just insanely underpowered, then what can we actually do to accelerate direction Y more?</p><p>The tools to do that are definitely much more expansive than just government action. One example of this is that if we decide that, let's say, brain-computer interfaces and eventually uploading are a better path to superintelligence than making bigger and bigger supermodels, then the problem is that the supermodel space is accelerating very quickly. There are many billions of dollars going into it, and meanwhile, in this other space of brain-computer interfaces and interacting with, understanding, and dealing with the human brain, there's a much smaller amount of funding going in.</p><p>You could try to solve that problem, or you could try to decrease regulation in that area, or you could rather try to get developers and builders more excited about working in that area. I think that acceleration side of things is one big aspect. 
I think the interesting thing about this is that there are a lot of these spaces that really could be significantly improved with even relatively small amounts of funding. With even the low hundreds of millions of dollars, we could have much better pandemic prevention. We could have a much better brain-computer interface space. We could have much better secure operating systems.</p><p>These are areas where that level of resources is just not going into them yet, because there is so little incentive. That's a thing that is quite easy for people to get into and, if they have resources, go and start focusing on. The challenge I see with a lot of these regulatory approaches is this: the thing that I think the traditional effective altruist approach is the least good at thinking through properly is the question of whether your impact is even going to be on the right side of zero. I think one of the reasons behind that weakness is that if you look at old-school EA causes, like malaria nets, Deworm the World, GiveDirectly, and all of those things, it's hard to imagine how those things could make the world worse.</p><p>The question is, well, do these things improve the world at a rate of one life saved per $4,000 or one life saved per $10,000? In either case, it doesn't really matter too much. You just go and throw a bunch of your money into that thing. When you can be reasonably confident that your effects are on the positive side of zero, then there's a whole set of intuitions that follow as a result, like the idea that carefulness becomes one of the most harmful things in the universe because it creates delays, and delays are the invisible graveyard.</p><p>Then if you start going into politics, the challenge is that politics is just a very anti-inductive game in so many ways. It's one of those examples of the idea that if people are aware of a theory, that itself might make the theory less true. 
There are different versions of this. Sometimes if people are aware that some people are acting according to that theory, that might also end up counteracting the theory.</p><p>For example, imagine two possible worlds: one world where SBF was an effective altruist and is what he is, and another world where SBF is equally scammy, equally narcissistic and attention-seeking and all of these things, but he's not an effective altruist. Let's say instead he is, I don't know, a Chinese nationalist, or some totally random thing in the opposite direction. In which of those worlds would things be better from an effective altruist perspective? I think the answer totally is the one where SBF had never heard of effective altruism or thinks that it's totally stupid.</p><p>Yes, and one of the things that I think offended people about SBF the most deeply is his forays into politics: how he started giving money to politicians, and how he even started actively campaigning in his own direction on how crypto should be regulated, which was really not aligned with the direction of the rest of the ecosystem, and all of these different things. A lot of that ended up really massively turning people off. A lot of that ended up harming even causes that he likes and that we like, in all kinds of ways.</p><p>I think the OpenAI situation is similar in that respect. Because in the OpenAI situation, what you basically have is just the raw optics of it: you have five people who are totally unaccountable, whom people have never heard of. People's first reaction is, wait, what the hell is this team? What the hell even is this weird corporate structure that gives power to these five people?</p><p>Then the first thing that people hear about this is that these five people have killed the CEO, not literally, and are threatening to destroy this amazing company that hires all of these bright people and is just making this AI thing that we know and love happen. 
That's people's first public impression of this entire governance structure. If you do that, then, yes, people are going to hate effective altruism. People are going to hate AI safety. These are very understandable and human reactions.</p><p>These are all concerns that become real once you start wading into this domain of politics, this domain of influencing large-scale incentives and influencing behavior instead of just being in a corner and doing things yourself. It feels like people are totally not ready for that. I think this is one of those areas where we need something like an approach that isn't just about, we have to pause, we have to regulate, we have to slow things down, but that actually goes and asks: what is an actually positive vision that a critical mass of both builders and the general public realistically can get behind, and that does have a reasonable shot of actually being viable?</p><p>Let's try to figure that out and get people excited about that. That just feels like a much better strategy than focusing exclusively on the pause direction. Yes, that's how I think about the difference, both from a public-opinion standpoint and from my own perspective.</p><p><strong>[01:06:44] Dan: </strong>Got it. You've noted, I think, a couple of times that your P(doom) from AGI is around 0.1, or 10%. Is there any one specific thing that you could see happen that would shift your P(doom) from AGI to less than 1%?</p><p><strong>[01:06:59] Vitalik: </strong>Good question. Some very effective, very convincing rebuttal of the theoretical arguments for why AGI is uniquely dangerous would definitely do it. I can't tell you what form that rebuttal would take, because I feel like if I could even give the form, I'd be 90% of the way to actually having the rebuttal itself. What other things? 
Obviously, us getting to even better-than-human-level AGI and things continuing to coast reasonably normally.</p><p>I think a lot of the arguments for doom hinge on this discontinuity that happens once the AI gets above human level versus being below human level. I think there are two discontinuities. One discontinuity is this whole recursive self-improvement thing: the fact that the AI actually would be able to outsmart people and hide from people while rapidly copying itself, and all of these things.</p><p>Then the other discontinuity is that there's a class of possible worlds where it's very easy to get up to roughly human level, but you just get stuck at that point, and suddenly it takes much longer to actually get above it. What that world would look like is basically a world where it turns out that it's very possible to replicate patterns of behavior, in some abstract sense, that already exist and that people have already been doing, and to generalize those and automate them. But there's some planning capability that humans have, especially planning in unexpected situations, that for whatever very fundamental reason is just a much harder thing to achieve, and we end up needing a lot more effort than expected to actually reach it. I feel like there are definitely sprouts of evidence of something like that being true. At least to me, AI progress in 2023 felt slower than it was in 2022, just in terms of how I feel about how much has changed since then.</p><p>It feels like 2022 was a big zero-to-one year. The biggest thing in 2023 is probably that it's been a catch-up year for the open-source ecosystem, which I think is great. 
As I've said in the post, the biggest non-doom risk that I am concerned about with AI is the centralization risk.</p><p>The nice thing about the open-source AI space, and especially all of these models that you can go and run on your laptop, which I love and have totally played with, is that if an AI breaks the world, it's going to be one of the big ones made by a big corporation or military. It's not going to be a random guy on his or her laptop.</p><p>Then on the other side, if we want to reduce the extent to which this will just hyper-centralize everything, then you need something that's an answer to the power that these big models provide, and open-source (or open-weights, I guess, is the better way to put it) AI models running locally are the way to do it. What we've seen is that it has been a catch-up year for the open-weights model ecosystem. We haven't seen comparable leaps of wow, amazing progress in the same way that we did in 2022.</p><p>It feels like more and more people are starting to say that LLMs are good at replicating patterns of things that have been done many times, but much, much less good at extrapolating and creating and thinking through fundamentally new categories of things. It could easily be that this is just a limitation of LLMs and there are one or two more technological breakthroughs that we need before we get to superhuman AI.</p><p>Then the question is, in that world, how long do those breakthroughs take? It could be five years, or it could be 50 years. The longer that timeline is, the lower my P(doom) is. If, three years from now, it feels like we're in an LLM plateau and there's nothing even more dazzling around the corner, then my P(doom) would probably drop. 
It would not drop to under 1%, but it would definitely drop by-- It could be two or three percentage points or so.</p><p><strong>[01:12:17] Dan: </strong>In one part of the post, you seem fairly optimistic about brain-computer interfaces, at least if they're possible and useful, as a path to saving ourselves and making sure AGI goes well. I'm curious about your views: people like Robin Hanson expect human descendants to be super weird and very different from us. He's got that book, <em>The Age of Em</em>, that explicitly outlines a vision for this. How important do you personally think it is that we conserve at least some of the things that we value today, well into the future, with our descendants?</p><p><strong>[01:12:53] Vitalik: </strong>I definitely think it's important. I get the desire to say, "Oh, we should be open-minded, and if this is the next stage of human evolution, then we should embrace it, because if we don't and we stick to present-day values, are we not committing the same sin that the old curmudgeons who dislike homosexuality are committing?" for example. [chuckles] I get that desire.</p><p>There's a difference between things that happen within the distribution of human behavior and things that are way outside of the distribution. If you think about <em>The Age of Em</em> world, as he describes it, then one of the risks is that competition is basically going to compete away consciousness. At some point Ems are going to stop being conscious. To me that would be terrible, and that would be an example of something that I would not want to see in our world in the future.</p><p>Even still, the thing with Robin Hanson's world, to me, is-- It's a world where things could easily be much worse. If you think about it, it's not a world where one superintelligent AI kills everyone. It's also not a world where we have a hyper-centralized world government. It's not a world where humans are pets. 
It's a world where humans have a path to continuing to be meaningful, frontier actors in the future of our galaxy.</p><p>It's a world that really does actually manage to evade lots of dystopias. At the same time, it definitely has its risks, which are basically, as Robin describes it, the whole set of problems of the Malthusian world coming right back with a vengeance. It's not perfect. If that's the default path, though, there are definitely a lot of people that would breathe a sigh of relief.</p><p><strong>[01:15:13] Dan: </strong>Have you been tempted to focus more of your time on AI, just given some of the effort you put into that post and thinking about it a lot lately?</p><p><strong>[01:15:20] Vitalik: </strong>I definitely feel the need to just make sure I understand the space. Part of understanding the space properly definitely is becoming an actual user. One example of this is an insight I got recently: current AI drawing tools are excellent at making an image if your goal is to make something that dazzles people. If your goal is to make something specific that you want for a particular purpose, then they're just terrible.</p><p>You have to fight them and do 20 rounds of inpainting and do all kinds of stuff, fumbling around with, let's call it, an AI you don't understand. It gets much harder. That's also a thing that I noticed was true with the GPTs as well. The time when the GPTs got good enough to seriously impress me was way before the time when the GPTs got good enough that I felt comfortable using them and trusting their output.</p><p>That's an insight that I only could have gotten by being an actual user. I definitely have been playing around with these things. 
I have this Python script open where I try to actually run some of these diffusion models locally and see if I can use them to draw things, and basically see to what extent I can actually do things without shipping all of the data about what I'm doing to large corporations and all of that.</p><p>I'm doing my part to stay up to date with both that space and the AI space itself, and with people's concerns about safety and alignment, and with what the people who are working on those issues are thinking about and worrying about. I definitely think it's important, though I'm definitely at the same time not doing the stereotypical thing of quitting crypto for AI or whatever. I just don't think that makes sense for me as a person to do.</p><p><strong>[01:17:34] Dan: </strong>Last question here. If Ethereum succeeds in your version of its mission and the vision that you have for it over the next 10 years, what would be the single most likely causal factor?</p><p><strong>[01:17:44] Vitalik: </strong>Single most likely causal factor in Ethereum succeeding? I just have to say continuation of trends. Basically, the same underlying reasons that made adoption and interest keep increasing over the last 10 years just end up continuing for another 10.</p><p><strong>[01:18:07] Dan: </strong>Would you say it's more about risk mitigation than it is about any specific change?</p><p><strong>[01:18:14] Vitalik: </strong>One aspect is risk mitigation. Another aspect is making sure that usability and scalability actually are there in time for when a much larger group of people wants to actually use the chain, because if the technology isn't there, then the next bull market is going to be a disaster for Ethereum. Transaction fees are going to go up to $500 and people are going to go back to hating crypto again. Scalability and usability do need to be solved. 
The good news is that we are much further along now than we were a year or two ago at solving them.</p><p><strong>[01:18:49] Dan: </strong>That's a great question to end on, Vitalik. Thank you so much for your time.</p><p><strong>[01:18:51] Vitalik: </strong>Thank you too, Dan.</p>]]></content:encoded></item><item><title><![CDATA[Adam Mastroianni]]></title><description><![CDATA[Watch now | Humor, peer review, and the future of science]]></description><link>https://www.danschulz.co/p/adam-mastroianni</link><guid isPermaLink="false">https://www.danschulz.co/p/adam-mastroianni</guid><dc:creator><![CDATA[Dan]]></dc:creator><pubDate>Fri, 08 Dec 2023 12:39:37 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/139530885/9d000cec088a4e32477e6bfe7025a35e.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<div id="youtube2-B4QvRZ5-4ZU" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;B4QvRZ5-4ZU&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/B4QvRZ5-4ZU?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><iframe class="spotify-wrap podcast" data-attrs="{&quot;image&quot;:&quot;https://i.scdn.co/image/ab6765630000ba8aa15857d4e8fd9f4448df6dcd&quot;,&quot;title&quot;:&quot;Adam Mastroianni&quot;,&quot;subtitle&quot;:&quot;Dan Schulz&quot;,&quot;description&quot;:&quot;Episode&quot;,&quot;url&quot;:&quot;https://open.spotify.com/episode/7v3YHfKsUvRHxHQfT8HiHb&quot;,&quot;belowTheFold&quot;:false,&quot;noScroll&quot;:false}" src="https://open.spotify.com/embed/episode/7v3YHfKsUvRHxHQfT8HiHb" frameborder="0" gesture="media" allowfullscreen="true" allow="encrypted-media" data-component-name="Spotify2ToDOM"></iframe><div class="apple-podcast-container" 
data-component-name="ApplePodcastToDom"><iframe class="apple-podcast " data-attrs="{&quot;url&quot;:&quot;https://embed.podcasts.apple.com/us/podcast/undertone/id1693303954?i=1000637751391&quot;,&quot;isEpisode&quot;:true,&quot;imageUrl&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/podcast-episode_1000637751391.jpg&quot;,&quot;title&quot;:&quot;Adam Mastroianni&quot;,&quot;podcastTitle&quot;:&quot;Undertone&quot;,&quot;podcastByline&quot;:&quot;&quot;,&quot;duration&quot;:4315000,&quot;numEpisodes&quot;:&quot;&quot;,&quot;targetUrl&quot;:&quot;https://podcasts.apple.com/us/podcast/adam-mastroianni/id1693303954?i=1000637751391&amp;uo=4&quot;,&quot;releaseDate&quot;:&quot;2023-12-07T02:11:38Z&quot;}" src="https://embed.podcasts.apple.com/us/podcast/undertone/id1693303954?i=1000637751391" frameborder="0" allow="autoplay *; encrypted-media *;" allowfullscreen="true"></iframe></div><p><strong>Timestamps</strong></p><p>(0:00:00) Intro</p><p>(0:00:45) Is bias overrated?</p><p>(0:01:46) Researcher ambition</p><p>(0:05:52) Illusion of moral decline</p><p>(0:08:10) Morality over time</p><p>(0:11:33) Humor and teaching</p><p>(0:13:48) Misconceptions about humor</p><p>(0:16:48) Lost in translation</p><p>(0:21:07) Good research ideas</p><p>(0:27:37) Peer review</p><p>(0:29:51) Science House</p><p>(0:35:29) Identifying talent</p><p>(0:40:11) Adam's acting career</p><p>(0:49:59) Art blemishes</p><p>(0:52:10) What makes a good piece of writing?</p><p>(0:54:21) Adam's information diet</p><p>(0:55:10) Poorly defined problems</p><p>(0:58:40) Measures of intelligence</p><p>(1:01:35) The Milgram Study</p><p>(1:04:17) The replication crisis</p><p>(1:06:15) How many studies matter?</p><p>(1:10:19) The future of science</p><h3>Links</h3><ul><li><p><a href="https://www.experimental-history.com/">&#8288;Adam's Substack&#8288;</a></p></li><li><p><a href="https://twitter.com/a_m_mastroianni?lang=en">&#8288;Adam's Twitter&#8288;</a></p></li><li><p>Subscribe to the 
podcast on&nbsp;<a href="https://www.youtube.com/channel/UCtRjQ8EA3Via7KXzv9F173Q">&#8288;YouTube&#8288;</a>,&nbsp;<a href="https://podcasts.apple.com/us/podcast/the-world-of-yesterday/id1693303954">&#8288;Apple&#8288;</a>,&nbsp;<a href="https://open.spotify.com/show/59YkrYwjAgiKAVMNGWPaLE">&#8288;Spotify&#8288;</a>, or&nbsp;<a href="https://www.danschulz.co/">&#8288;Substack&#8288;</a>.</p></li><li><p>Follow&nbsp;<a href="https://twitter.com/dnschlz">&#8288;Dan on X&#8288;</a>.</p></li><li><p>Share anonymous feedback on the podcast here:&nbsp;<a href="https://forms.gle/w7LXMCXhdJ5Q2eB9A">&#8288;https://forms.gle/w7LXMCXhdJ5Q2eB9A</a></p></li></ul>]]></content:encoded></item><item><title><![CDATA[Scott Sumner]]></title><description><![CDATA[Watch now | Film, literature, and monetary policy]]></description><link>https://www.danschulz.co/p/scott-sumner</link><guid isPermaLink="false">https://www.danschulz.co/p/scott-sumner</guid><dc:creator><![CDATA[Dan]]></dc:creator><pubDate>Thu, 30 Nov 2023 20:47:04 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/139306817/c9ceab9ba57af7eadc903b827f92663f.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<div id="youtube2-b9Ek9njaCIk" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;b9Ek9njaCIk&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/b9Ek9njaCIk?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><div class="apple-podcast-container" data-component-name="ApplePodcastToDom"><iframe class="apple-podcast " 
data-attrs="{&quot;url&quot;:&quot;https://embed.podcasts.apple.com/us/podcast/the-world-of-yesterday/id1693303954?i=1000637079204&quot;,&quot;isEpisode&quot;:true,&quot;imageUrl&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/podcast-episode_1000637079204.jpg&quot;,&quot;title&quot;:&quot;Scott Sumner&quot;,&quot;podcastTitle&quot;:&quot;The World of Yesterday&quot;,&quot;podcastByline&quot;:&quot;&quot;,&quot;duration&quot;:4365000,&quot;numEpisodes&quot;:&quot;&quot;,&quot;targetUrl&quot;:&quot;https://podcasts.apple.com/us/podcast/scott-sumner/id1693303954?i=1000637079204&amp;uo=4&quot;,&quot;releaseDate&quot;:&quot;2023-11-30T17:07:28Z&quot;}" src="https://embed.podcasts.apple.com/us/podcast/the-world-of-yesterday/id1693303954?i=1000637079204" frameborder="0" allow="autoplay *; encrypted-media *;" allowfullscreen="true"></iframe></div><iframe class="spotify-wrap podcast" data-attrs="{&quot;image&quot;:&quot;https://i.scdn.co/image/ab6765630000ba8ac4c2a87c0e1e904006926892&quot;,&quot;title&quot;:&quot;Scott Sumner&quot;,&quot;subtitle&quot;:&quot;Dan Schulz&quot;,&quot;description&quot;:&quot;Episode&quot;,&quot;url&quot;:&quot;https://open.spotify.com/episode/11DYPfs97MRl0UWa5bMlP9&quot;,&quot;belowTheFold&quot;:false,&quot;noScroll&quot;:false}" src="https://open.spotify.com/embed/episode/11DYPfs97MRl0UWa5bMlP9" frameborder="0" gesture="media" allowfullscreen="true" allow="encrypted-media" data-component-name="Spotify2ToDOM"></iframe><h3>Timestamps</h3><p>(<a href="https://www.youtube.com/watch?v=b9Ek9njaCIk&amp;t=0s">0:00:00</a>) Intro</p><p>(<a href="https://www.youtube.com/watch?v=b9Ek9njaCIk&amp;t=33s">0:00:33</a>) Fiction for economists</p><p>(<a href="https://www.youtube.com/watch?v=b9Ek9njaCIk&amp;t=224s">0:03:44</a>) Knausgaard or Proust?</p><p>(<a href="https://www.youtube.com/watch?v=b9Ek9njaCIk&amp;t=424s">0:07:04</a>) TV or film?</p><p>(<a href="https://www.youtube.com/watch?v=b9Ek9njaCIk&amp;t=718s">0:11:58</a>) Joseph 
Conrad and Werner Herzog</p><p>(<a href="https://www.youtube.com/watch?v=b9Ek9njaCIk&amp;t=869s">0:14:29</a>) Underrated writers</p><p>(<a href="https://www.youtube.com/watch?v=b9Ek9njaCIk&amp;t=1007s">0:16:47</a>) Do pessimists make better art?</p><p>(<a href="https://www.youtube.com/watch?v=b9Ek9njaCIk&amp;t=1114s">0:18:34</a>) Cultural pessimism</p><p>(<a href="https://www.youtube.com/watch?v=b9Ek9njaCIk&amp;t=1503s">0:25:03</a>) Meaning in jobs</p><p>(<a href="https://www.youtube.com/watch?v=b9Ek9njaCIk&amp;t=1734s">0:28:54</a>) Behavioral vs classical economics</p><p>(<a href="https://www.youtube.com/watch?v=b9Ek9njaCIk&amp;t=2056s">0:34:16</a>) Are bubbles real?</p><p>(<a href="https://www.youtube.com/watch?v=b9Ek9njaCIk&amp;t=2393s">0:39:53</a>) Nominal GDP</p><p>(<a href="https://www.youtube.com/watch?v=b9Ek9njaCIk&amp;t=3389s">0:56:29</a>) Relative importance of monetary policy</p><p>(<a href="https://www.youtube.com/watch?v=b9Ek9njaCIk&amp;t=3543s">0:59:03</a>) Equilibrium interest rates</p><p>(<a href="https://www.youtube.com/watch?v=b9Ek9njaCIk&amp;t=3898s">1:04:58</a>) Technology and productivity</p><h3>Links</h3><ul><li><p><a href="https://www.themoneyillusion.com/">&#8288;Scott at TheMoneyIllusion&#8288;</a></p></li><li><p><a href="https://www.econlib.org/author/ssumner/">&#8288;Scott at EconLog&#8288;</a></p></li><li><p><a href="https://www.themoneyillusion.com/what-do-we-mean-by-meaning/">&#8288;Scott's "What do we mean by meaning?"&#8288;</a></p></li><li><p><a href="https://www.themoneyillusion.com/wallowing-in-nostalgia-an-autobiography/">&#8288;Scott's "Wallowing in nostalgia (an autobiography)"&#8288;</a></p></li><li><p><a href="https://www.themoneyillusion.com/the-strange-case-of-robert-louis-stevenson/">&#8288;Scott on Robert Louis Stevenson&#8288;</a></p></li><li><p><a href="https://www.themoneyillusion.com/what-ive-been-reading-2/">&#8288;Scott on Joseph Conrad&#8288;</a></p></li><li><p><a 
href="https://www.themoneyillusion.com/short-intro-course-on-money/">&#8288;Scott's intro course on money</a></p></li></ul><h3>Transcript</h3><p><em>This transcript is AI-generated and likely contains many errors. I&#8217;m planning to clean up the transcripts for all of the episodes over time.</em></p><p>0:12 Dan</p><p>Okay, today I'm talking with the excellent Scott Sumner. He previously taught economics at Bentley University, he was the director of the program on monetary policy at the Mercatus Center, and he writes one of the most popular economics blogs, The Money Illusion, which I highly recommend; he also posts over at EconLog. Scott, welcome.</p><p>0:30 Scott</p><p>Thanks for inviting me, Dan. Glad to be here.</p><p>0:33 Dan</p><p>So first question: can reading fiction and watching films make you a better economist?</p><p>0:38 Scott</p><p>Oh, you've started off with a very hard question. Possibly, in some ways. I think just having a broader life experience in general, reading a lot of things outside of economics, traveling, doing all kinds of different things can make you a better economist. It's hard to really pin it down, but if you expose yourself to a wide range of things in the arts and humanities and science and travel and literature and so on, you get a deeper understanding of the world and you're less likely to have an overly narrow view of certain issues. I find some economists, from my perspective, do have an exceptionally narrow, reductionist view of certain kinds of issues, but it's hard for me to be very specific on that because it's hard to draw a direct connection between a novel I might have read and economics. I think I've actually maybe benefited more from reading little bits of philosophy and things like that that have kind of expanded my ability to think about things in a different way. So that's something I would point to. I just did a blog post at EconLog yesterday, I guess it was, called the Wittgenstein Test. 
And it was based on a quotation from Ludwig Wittgenstein. And I use that quotation as an epigraph for my book, <em>The Money Illusion</em>. And those kinds of things, you know, philosophical ideas that allow you to kind of look at things in a different way, open your mind to different methodological approaches and so on. I found those to be quite useful. And so I don't think I'm as narrow in the way I think about issues as I was when I was younger.</p><p>2:31 Dan</p><p>Yeah, it's funny you say philosophers over maybe fiction specifically. I guess, when you look back at the beginning of economics, it was kind of more philosophical, and, you know, now it's much more mathematically rigorous. But, for example, David Hume wrote a whole bunch of essays on economics, and philosophy was probably his main thing.</p><p>2:49 Scott</p><p>Yeah, that's right. I think specifically what I'm referring to with philosophy is the whole question of methodology and, you know, how we know what's true, what to believe. And I think when I was in school, when I was young, I wanted to gravitate towards one single approach, like: this is the way you figure out what's true, and, you know, you test a hypothesis or whatever. And as I've gotten older, I've realized that there are just a lot of pieces of evidence we bring to bear on what we should believe. And it's a mixture of empirical data, theory, even things like metaphors about the world, which can be useful in kind of helping you achieve a deeper understanding of economic issues. So I don't believe any longer that there's only one approach to truth. So in that sense, I'm sort of eclectic in my approach to methodology, I would say.</p><p>3:44 Dan</p><p>If a reader had to choose one and you were to make a recommendation, would you tell them to go read Knausgaard or Proust?</p><p>3:50 Scott</p><p>Well, that's a tough question. I mean, Proust has stood the test of time. 
Knausgaard might be more interesting to people of this generation because we can relate more. I found I could relate a lot to his life, and when I read his magnum opus, <em>My Struggle</em>, I kind of felt like it was the first time I really recognized that other people experience life the way I do. We see other people from the outside, right? So we don't know whether what's going on in their interior perspective matches what's going on in our mind or whatever. And when we read literature, I think we get closer to that. But, you know, Proust was so far back in time that he was living in a very different world, whereas Knausgaard was living in a world that, although it's a different country from where I grew up, was still roughly the same time period and so on. And he had such an ability to make you feel like you were, you know, experiencing what he experienced that it was the first time I really felt I read something where I could say, yes, that's what life is like. Even though his life was very different from mine, like he's a different personality and so on, just the way he experienced things was presented in a way that made me feel less like I'm alone in the universe. I think it's called solipsism, this view that you're the only one and everybody else is just as you see them on the outside. Now, of course, I've always known that's silly, but, you know, still, to really know something at an intuitive level is different from knowing it intellectually. And when you read Knausgaard, you do know it at an intuitive level. At least it was that way for me.</p><p>Proust I read for the first time in my life only about a year or two ago, and it's a wonderful book, you know, maybe the greatest novel ever. So I would encourage people to read either one.</p><p>5:49 Dan</p><p>So I read the first two volumes of <em>My Struggle</em> last year, and I had the exact same response you did. It was more visceral than anything I've ever read before. But I have not yet read Proust. 
So good to know. It's got good reviews from you.</p><p>6:03 Scott</p><p>Well, yeah, and I think if I read Proust when I was young, I wouldn't have liked it. I wouldn't have been able to really understand it in the way I did when I was older. Now, this is not necessarily true for everyone. There's some readers that can appreciate very sophisticated literature when they're young. But for me, I didn't really have a very good understanding of society. And, you know, his book is so much about the interaction of people in fairly complex social situations. When I was young, I was a loner. I didn't really have very many friends, so I just didn't have any understanding of society at that level, and much of that book would have gone right over my head if I'd read it when I was young. So I think, you know, someone just starting out might want to start with Knausgaard. For that reason it might be easier at a young age to absorb that kind of thing, as compared to a very complex society, and one from, like, the, I don't know, 1890s or whenever it takes place.</p><p>7:04 Dan</p><p>Do you think that the best TV shows can ever compare to the best films?</p><p>7:08 Scott</p><p>Not really. Well, obviously that's very subjective, I should say. I think TV and film are different types of media. TV is mostly like a writer's medium and film is sort of a director's medium. And film, I think, is more about visual images, and TV is more about dialogue. So it really depends on what you're looking for, and I'm a very visually oriented person. I like the visual arts more than the other arts. I find it easier to understand the visual arts more than music or poetry or things like that. So for me film is more appealing, because at a visual level it's just a more impressive art form than TV, no matter how good the quality TV is.
The writing in TV is often excellent, and I understand why people that are interested in dialogue and stories and writing love quality TV shows, and why they're bored by a lot of classic films, which are often very slow moving and work at a visual level rather than a dialogue level. So it is to some degree a matter of taste, but for me the peaks of filmmaking are just higher than the peaks of quality TV.</p><p>8:18 Dan</p><p>How would you recommend someone that is more used to absorbing dialogue from television or more modern movies get into these types of movies? Someone who's preparing to watch Tarkovsky for the first time or something; how would you recommend they reorient themselves to prepare for it?</p><p>8:41 Scott</p><p>Right, so I would say two things on this. I think it's a mixture of nature and nurture, I guess would be the way I would put it. So I can't speak for everyone. I think that, you know, some people have what I might call a relatively balanced mind, where they can be equally good at absorbing material from different fields, you know, a Tyler Cowen, someone like that. For me, my mind is very unbalanced. I'm very strong in visuals and weaker on the linguistic side. So some things come very easy to me and some are very difficult. So I think it's partly innate. On the other hand, making the effort to try to learn things does improve your appreciation over time. So there's a lot of films that I would not have appreciated when I was younger, but just making the effort to watch a lot of classic films over time has sort of educated my mind to learn how to see what I'm watching, or how to interpret or understand what I'm watching. And I don't mean understand in the sort of verbal sense of explaining the intricacies of the plot, but understand at a more intuitive level what the director is doing, often in a way that you really can't put into words. So that's why it's so hard for me to explain. You know, you mentioned Tarkovsky.
Well, what makes Stalker or Solaris or one of those films so great? It's really hard to put into words, because he's working at a very visual level. There's very little dialogue in a lot of his films. So I think you can learn it to some extent, and it probably helps to start with things that are easier because they're more entertaining. So for instance, if you watch the classic films of Alfred Hitchcock, he's a very entertaining filmmaker, but also a great artist.</p><p>So it's a less painful way to learn about cinema than a very difficult, esoteric, you know, art film that comes out more recently. So, you know, people like Hitchcock; Kubrick's another one who makes films that are entertaining but also very artistic. And I think that those two directors work very strongly at a visual level especially, so you probably learn more about cinema from watching their films than you'd learn from watching other great films like The Godfather or Casablanca, which have a more traditional visual style. They don't have the sort of trademark style of the director to the extent that a Hitchcock film does. Or, to take a modern example, a Wes Anderson film: if you put a Wes Anderson film on, within 30 seconds you know you're watching a Wes Anderson film. There's other directors where you don't really know who the director is. It's just, is this a good movie or not? Is it a good story? Does it have good actors? Like, no one pays attention to who directed the latest James Bond movie or something like that, typically, right? But I think if you have a director with a really strong style like Hitchcock or Kubrick, who also makes films that are entertaining, it's a way to learn about style.</p><p>11:59 Dan</p><p>Got it. Next question for you: I've noticed you're a fan of both. I love your posts on Joseph Conrad, and you also talk a bit about liking Werner Herzog.
And I want to quote one of the views that you've mentioned you believe Conrad holds, and sort of your speculation about him, which is the theme that &#8220;the universe is cold and meaningless. Most people live by comforting illusions, fairy tales, but look for meaning anyway.&#8221; And Herzog carries a lot of these themes. He really kind of views nature as dark. What is it about both of these two, one of your favorite filmmakers and one of your favorite writers, and these themes that attracts you to them?</p><p>12:39 Scott</p><p>That's hard to say. Like, I definitely, when I read Conrad, and he's a writer I read when I was young, I just connected with his worldview in a lot of ways. And that may be because I already had similar views, or maybe he was sort of convincing me as I was reading, maybe some of each. So yeah, the whole question of meaning. I guess the way I look at things, meaning isn't really out there. It's something you sort of generate internally. And I think that's the way he looked at things as well. I can't say as much about Herzog, because now I can't really remember the dialogue. With Herzog, I remember the visual images in his films more than the dialogue. I know he has a very mesmerizing narration to his documentaries, you know, his vocal style and so on. But Conrad is someone who I think looked at the world in a very clear-eyed way, without a lot of illusions. I mean, it was a long time ago, so it's silly to bring up politics, but if you wanted to bring that up, you might say he didn't have a lot of illusions of either the left or the right. So his books can be considered a critique of both sides of the ideological spectrum. He saw through a lot of illusions that people had. And so I feel like, you know, I observe a lot of people with the sort of illusions that he thought they had. And maybe that comes from reading him, or maybe I was just predisposed to look at people that way.
You know, I suppose at some level it's a non-religious view of the world. People that are deeply religious probably do believe the meaning is out there. They're not just generating it internally; they're looking for something that's really out there. That's fine, and that may be a better way to live, in fact, but it's just not the way I look at the world.</p><p>14:29 Dan</p><p>Yeah, yeah. You noted that Robert Louis Stevenson is overlooked, even though he's praised by Borges, Proust, Nabokov; all these writers said he was fantastic. How efficient do you think the market for opinion on modern popular art is?</p><p>14:43 Scott</p><p>Right, so I should say I'm not really qualified to say he's a great writer. I like Stevenson a lot, and the reason I believe he's a great writer is exactly what you said: all these other great writers say he is. So I don't think my skill at literary criticism is strong enough that my opinion has much, you know, validity. But in terms of your question about why he is overlooked, I think it's because the kind of stories he told are kind of out of fashion. So he was writing popular stories. You know, you think of books like Treasure Island, which is viewed as a children's story, or even Kidnapped. And I think the writers that are taken more seriously today are those that deal with less sort of escapist literature, if you will. Now, I don't think all his books are escapist by any means; you know, he did a number of different types of novels. But also he was such a good writer that what he did seemed almost effortless. Like, if I were comparing him to Conrad: when you read Conrad, you have a feeling that he's really struggling to get the words out that convey what he's feeling very, very deeply, whereas Stevenson just seems to write almost effortlessly. That's the impression I have.
And so the one that's really struggling with, you know, deep emotional issues in some sense seems like a weightier, you know, more serious writer than someone who's effortlessly producing entertaining stories. But obviously these great writers that really love Stevenson see something in his skill that I can't really explain in words. Like, I don't really see how; to me it's all magic. He creates this wonderful effect in his books, and I don't see how he does it. But they're able to see what makes Stevenson better than other writers of popular stories for, you know, teenage boys and so on. And all I know is I see the effects, the effects it has on me as a reader.</p><p>16:47 Dan</p><p>So you've noted before that really great writers, it seems like sometimes, or oftentimes, they're maybe mildly depressed, and your speculation is maybe they're just seeing reality too clearly. Do you think optimists or really happy people can make good art, or is pessimism going to typically be a better trait for someone who's trying to write or make films?</p><p>17:08 Scott</p><p>Yeah, that's another hard question. So, I'm sure there have been some great artists that were happy optimists. I mean, Stevenson probably, at some level. I don't know, that's a hard question. There's this concept of what used to be called melancholia, I think it's called depression today, that throughout history has been associated with intellectuals in general, not just, you know, artists. And, you know, whether the association is correct or not, there's been a perception that people that are intellectuals are more inclined to be melancholic or depressed. Maybe that's because they have a harder time finding meaning, because they see a lot of things as illusionary. You know, a lot of the things in life that provide the greatest meaning, I think, are things that are in some sense illusions, right? You think of a child on, I don't know, Christmas morning, looking at the presents under the tree.
So, for them, the world is kind of magical, right? And as you get older, you see the world more clearly, and you don't have the same kind of euphoric feeling, maybe, on Christmas morning that a child would have. They're seeing in some sense an illusionary world in their mind, right? On the other hand, you can still find meaning, but I think it becomes a little bit harder. And so maybe it's harder for an intellectual to see life as having meaning than it is for just an average person. I'm not sure.</p><p>18:43 Dan</p><p>So it seems like you're, in some of your posts, a bit of a cultural pessimist. You kind of talk about how after an art form has expressed its most potent ideas, it sort of runs out of steam. And for painting, you cite abstract expressionism as kind of this final frontier, where all visual representation is gone and it's all ideas. What, in your view, would need to change in society for us to start making more interesting or great art again? Or do you think it's a dead end, and really there is an expiration date for each form of art?</p><p>19:17 Scott</p><p>Well, we're certainly not at the end. So here's one point I would make. Over time, the low-hanging fruit gets picked, right? So artists make the obvious masterpieces. And then there's, and this is not my theory, this is a theory that's been kicked around a lot, the anxiety of influence: artists want to have their own style, they want to create their own style, right?</p><p>19:43 Dan</p><p>Yeah, Harold Bloom.</p><p>19:45 Scott</p><p>So it becomes harder and harder as more things have been done. And when technology opens up a new field, it's like people entering a new country where there's all this land that is waiting to be harvested.
And over time the low-hanging fruit gets plucked and it becomes harder and harder, so you have to find something new for artists to express themselves in, in order to have the field continue to have energy, right? So when I was young, like in music, for instance, popular music invented some new styles, like, you know, rock and roll, and other styles in the 60s, and there was a flourishing of new ideas that came out of that. And then they kind of ran out of ideas and rock and roll became stale. And then, you know, a lot of modern poetry and literature is more difficult than the literature of, you know, the 19th century. Many people have trouble understanding abstract painting who like figurative painting. And film, I think, the classic period kind of probably peaked around, say, the 1970s or so, and then film became more and more difficult. Or even, you know, the 50s to the 70s, in that period. And people say, well, why don't they make movies like The Godfather anymore? Well, that's been made. It's sort of like saying, well, why doesn't someone go out and invent another light bulb? Edison invented it already, you know; you have to invent more difficult things now. I mean, Edison invented, I think, dozens of consumer products, as far as I can recall, right, in his lab. But he was working with kind of an open field. Electricity had just been mastered, and so there was this opportunity to invent all these electronic appliances. Once they're invented, you need some other fundamental underlying technology to get a new wave of invention. You know, we've had that in, you know, biotech, computer chips, and so on. And I think the arts are kind of similar: you have to have a new field open up. So a lot of people think film was the most important art form of the 20th century, and it's not because film is better than literature or symphonic music or anything like that; it's that it was a relatively new art form in the 20th century, so it opened up a lot of possibilities for invention, whereas the novel and the
symphony and so on had been around for a few hundred years, and most of the great ideas, or the easy great ideas, had already played out.</p><p>I should say I don't want to appear too much of a pessimist. There still are great movies being made, and also great novels; even in the 21st century I've read a lot of really excellent novels. I mean, they're such complex art forms, both the novel and film; you can do so many things with them, right? And I may be wrong, but just thinking off the top of my head, they seem to me much more complex than, say, sculpture, right? Like, how many ways can you do a sculpture of a human figure? The possibilities seem sort of limited. But it seems like there's just so many ways to do a film and so many ways to write a novel that even if some of the low-hanging fruit has been plucked, they're certainly not dead art forms by any means.</p><p>23:34 Dan</p><p>Yeah, yeah. I mean, I think Tyler has a post talking about how, in his view anyway, if you look at some of the writers of the last 20 years, like the early 21st century, you have Sebald, Bola&#241;o, Ferrante; I think his claim is that these aren't actually so far behind the 20th century or 19th century greats.</p><p>23:55 Scott</p><p>Exactly, right. No, I think there's been a number. And you mentioned overlooked writers. I'm reading a guy named Gene Wolfe.</p><p>24:04 Dan</p><p>Oh, yeah.</p><p>24:04 Scott</p><p>And he seems overlooked to me. I don't actually read much science fiction. But in reading his novels, they just seem to be at a much higher level in terms of, you know, literary artistry or whatever, than any of the other science fiction I've read. So I'm not quite sure why he's not more well known. But yeah, Bola&#241;o and Max Sebald and all those people. I love those writers.</p><p>24:30 Dan</p><p>I was gonna say Gene Wolfe has like a cult following, but it seems like he doesn't quite fit into a tradition. He's too literary for the typical, like, hard sci-fi crowd.
And then for whatever reason, literary people don't take sci-fi seriously.</p><p>24:43 Scott</p><p>Yeah, I think that's right. He sort of falls in between the two.&nbsp;</p><p>And I think both sides should take him more seriously. Yeah, but some of his books are a little bit difficult, so if you're used to just very straightforward narrative that's easy to follow, it could be frustrating to read him. But yeah, he's a wonderful writer.</p><p>25:04 Dan</p><p>In your post on retirement, you note that some of the biggest mistakes in your life were doing things for money, like being a landlord or writing a principles of economics textbook. Assuming there are many other people out there like you who feel the same way, do you think that it's possible that the U.S. is approaching a post-scarcity economy where we'll see a lower supply of people who are willing to do jobs that lack meaning?</p><p>25:26 Scott</p><p>Yeah, I mean, assuming that everything goes well, like, if we assume the economy continues to grow, and if we assume AI does what people think it's going to do eventually, then we'll have enormous increases in productivity, probably over the next hundred years or so. And we already live in a world where most people don't want to do jobs like going out in the field and picking fruit in the hot sun, right? Or, you know, those kinds of difficult jobs, or working in a coal mine. And there'll be fewer and fewer jobs like that, because so many jobs will be mechanized and, you know, robots will do many repetitive jobs. And, um, so yeah, people are kind of insatiable in terms of their desires, right? No matter how good you make a person's life, they always want more; not necessarily more money, but more something: more meaning, more whatever it is that drives them.
And so, uh, future improvements in technology that allow us not to do things we don't want to do will, I think, push people more and more towards jobs where they feel it has meaning. You know, I guess that's a good thing. I'm not sure we'll actually be any happier. I have this kind of hedonic treadmill theory that, you know, every time we improve things, we just expect more. So maybe we're not actually any happier, but I still think it's worth doing in case I'm wrong. And I also believe that we're probably happier living in a society where you're free to pursue your dreams, even if your dreams don't make you happier. Does that make sense?</p><p>27:05 Dan</p><p>It does. Yeah, yeah.</p><p>27:07 Scott</p><p>So, I wrote this paper once on neoliberalism, where I look at what makes the reported happiness rankings higher in the more neoliberal countries. The writer V.S. Naipaul has a wonderful quotation on that; you can probably find it somewhere in my blog, I've quoted it a few times. It's on the phrase &#8220;the pursuit of happiness&#8221; in the Declaration of Independence, and how important that phrase is and what it meant to him. And it's something I think everybody should read. The fact that they use the phrase &#8220;the pursuit of happiness,&#8221; not &#8220;happiness,&#8221; is very important in his view. So I kind of feel like what we call happiness is actually the expectation of future happiness, in a weird way. Like I mentioned the child, you know, Christmas morning, looking under the tree at the presents. The child's very happy because of the anticipation of future happiness. Maybe two days from now they're playing with their toy and they're bored.</p><p>28:27 Dan</p><p>That's my memory of Christmas, yeah.</p><p>Scott</p><p>The beginning of a romantic relationship, I think people are often at their happiest at that point, right? At the beginning of a romantic relationship between two people.
And part of it is the anticipation of all the good things that are going to come out of that relationship.</p><p>28:54 Dan</p><p>That's interesting. You wrote a post in 2018 arguing that we should deemphasize behavioral economics in favor of classical economics. But I also feel like sometimes to me that line is a little bit blurry. So I'm wondering where you view the boundary between a classical econ idea and a behavioral econ idea. And one example here would be people who are allergic to inflation for whatever reason. Sometimes you cite that, well, inflation can actually be good because people like the idea of their income going up and they really don't like the idea of it going down. Is that sort of idea a behavioral or a classical idea?</p><p>29:32 Scott</p><p>Yeah, that's a good point, and certainly my blog is called The Money Illusion, which is a behavioral idea, so it's ironic that I would make that claim in a blog titled after a behavioral concept. It's a concept that sort of implies a certain level of irrationality, not in a psychological sense, but in an economic sense, right? People who suffer money illusion confuse real and nominal variables. So, I think, you know, behavioral economics is correct probably about a few of their ideas, certain biases people have about, I don't know, saving, you know, present versus future and so on. But I do think that overall, the classical theories explain things better than many economists even believe. As you probably know, I've argued against the whole idea of asset price bubbles, which I don't think actually exist in any meaningful sense. And I'm a believer that the rational expectations hypothesis is very important in macroeconomics. And just in general I think that we have to be a little bit distrustful of our intuitions about people's behavior. So I did a blog post on how things tend to be more elastic than people believe. Like the demand for products is more elastic than you would think using common sense. 
Than even what I would think using my own common sense. And, you know, if you ask people about addictive drugs, they say, well, the demand will be very inelastic because people are addicted. But in many cases, people's demand for addictive drugs is probably more unit elastic. Like, imagine a drug addict that just has a fixed amount of money, like a disability pension, and they spend all the money they have on, you know, cocaine or something. Well, their demand for cocaine is unit elastic: the price goes up 10%, their quantity demanded goes down 10%, right? I'll give you another example. My father, who was a smoker, used to tell me that, you know, a higher tax on cigarettes isn't going to make people stop smoking. You know, he was really addicted to cigarettes. And later in life, my mom mentioned casually that when she was very young, she had smoked. And I said, well, why did you stop smoking? And she said, well, when your father and I got married, we thought we could only afford to have one smoker in the family. So, you know, my dad's intuition wasn't even consistent with his own family, right? But I also understand why my dad would say that, because I kind of feel that way at an intuitive level. Like, I just go into a grocery store, I grab an item, I don't even look at the price in many cases. So I can see why intuitively people think, oh, you raise the price five cents, that's not going to affect whether someone buys it. That is our intuition. But I feel like the world actually behaves much like what classical economic theory predicts, assuming people behave rationally. And so we don't really need a lot of behavioral economics to explain what's going on. Again, if you don't agree with me on asset price bubbles, then you'd think we do need behavioral economics, right? To explain those. But if I'm correct and there are no asset price bubbles, then we don't need that either.
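The unit-elastic case Scott describes, a fixed budget spent entirely on one good, can be sketched in a few lines of arithmetic. The budget and price figures below are made up for illustration; they are not from the conversation:

```python
# Unit-elastic demand from a fixed budget: a buyer who always spends the same
# total amount buys quantity = budget / price, so spending never changes and
# measured elasticity is about -1 (exactly -1 for infinitesimal price changes).
budget = 300.0  # hypothetical fixed monthly spending, e.g. a disability pension

def quantity_demanded(price):
    """Quantity a fixed-budget buyer purchases at a given price."""
    return budget / price

p0, p1 = 10.0, 11.0  # a 10% price increase
q0, q1 = quantity_demanded(p0), quantity_demanded(p1)

pct_dp = (p1 - p0) / p0       # +10%
pct_dq = (q1 - q0) / q0       # roughly -9.1% with a discrete change
elasticity = pct_dq / pct_dp  # roughly -0.91; tends to -1 as the change shrinks

print(f"spending before: {p0 * q0:.0f}, after: {p1 * q1:.0f}")  # unchanged
print(f"measured elasticity: {elasticity:.2f}")
```

With total spending pinned down, a price rise mechanically shows up as a near-proportional fall in quantity demanded, which is exactly the cocaine-budget intuition in the passage above.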
And so I do think that behavioral economists are probably right about a few things, and maybe even some of their policy ideas, like having adoption of the pension be the default option rather than making you sign up for a pension. You know, those might be good ideas, but I don't think behavioral economics will radically transform the field of economics in the way that a lot of people thought early on, when it was having its first major successes. Now we're in the sort of backlash period, where some of the stuff has not replicated. But even apart from that, I don't think it was really transforming the field in any major way. I think what happened is that people outside the field of economics have a lot of resentment about what economists say about the world in many cases. We provide unpopular opinions when we talk about things like opportunity cost. And because we provide a lot of these unwelcome messages, people outside economics think there must be something wrong with economists: they have too narrow a view of human behavior. So when behavioral economics came along, it was welcomed by non-economists. Like, finally, someone's going to shake up the field and introduce a note of realism and fix the problems with classical economics. But it hasn't really changed the field that much at all. The things that annoyed people before, I think, still annoy them, right? And I think we're just unfortunately fated to be kind of unpopular in our advice on certain things.</p><p>34:16 Dan</p><p>Let's double tap into your idea on asset bubbles. Your idea, as I understand it, is basically, well, if you look at the NASDAQ back in, like, 2002, it was at, you know, 1,200, but today it's at 14,000. And so, you know, it's up, like, 13x or something. And so the idea behind the dot-com bubble was technically correct: it was, hey, technology is the next big wave. And now look at the top five companies; you've got, you know, Facebook, Amazon, Apple, Microsoft dominating the economy.
You look at Bitcoin as well; it actually fared incredibly well, even though it went through some ups and downs. But my question to you is, at what level of aggregation do you need to make this claim? In 2000, if you just take the Nasdaq, that's everything, that's the entire aggregate. But if you took a subset, even within tech stocks, you might miss the five names that I just mentioned, which carried the majority of the weight. And so at what level of aggregation can you actually speak about this rule on asset bubbles?</p><p>35:15 Scott</p><p>Well, that's a very good point. So let's take the case of Bitcoin, for instance, you know, which is up, like, say, 10,000-fold or whatever. The problem here is, if markets are 100% efficient, if the efficient market hypothesis is completely true, then in that kind of world the overwhelming majority of potential bitcoins should go to zero. Because if you have one that goes from $3 to $30,000, you can afford to have a lot of others that go from three to zero and you're still outperforming the market, right? But you don't know which ones will go to zero and which ones will go astronomically higher. So in that kind of world, it makes sense that most of the speculative assets will ex post be overpriced and look like foolish investments. They'll look like bubbles, but they'll actually be efficiently priced, because we don't know which ones will be the next Amazon, Apple, Google, Facebook, etc. And this also feeds into people's confirmation bias: people are very confident that they're right about bubbles existing. Think about the fact that if you predicted a hundred bubbles, and 99 times you were correct, and the one time you were wrong was Bitcoin, it's only human nature to think you're usually right, right? Most of the time you see something as a bubble, it goes to zero or almost zero, let's say.
So you're going to pat yourself on the back and think, yeah, I really understand bubbles. When I see them, I'm almost always correct. But in fact, I would argue, you'd be wrong, because if you called everything, the 99 bubbles plus Bitcoin, a bubble and didn't invest in any of them, and your neighbor invested in all 100, he or she would have dramatically outperformed you, right? Even if 99 went to zero, Bitcoin goes from $3 to $30,000, so it's still a tremendous investment. And so I think the fact that most of these highly speculative things end up like Pets.com and so on tends to give people a certain confirmation bias, that they've been right about bubbles when they weren't right at all. So they're not looking at the right way of thinking about the bubble hypothesis. Like, people always ask me, well, how would you test whether there are bubbles or not? And you don't test by looking at whether something went up and down, because efficient markets will go up and down. The way you test is you try to figure out whether the bubble hypothesis is useful. Someone once said something like: that which has no practical implications has no theoretical implications. Something to that effect; I don't know if I got it right. It has to be useful in some way. Like, can someone point to a set of mutual funds that are run on the bubble theory, the theory that there are bubbles? So they buy everything that is underpriced according to bubble theory, and they short everything that's overpriced according to bubble theory. And I know you can't short everything, right? But you can short some things. So you set up a mutual fund and you invest on that basis. Does that mutual fund outperform an index fund? And obviously you could get lucky, but as a class, if those bubble funds are outperforming index funds consistently, then yeah, bubbles exist and the bubble hypothesis is useful.
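Scott's &#8220;99 bubbles plus Bitcoin&#8221; point is just portfolio arithmetic. The $3 to $30,000 figures are his; the equal $1 stakes are an assumption added here to make the sketch concrete:

```python
# Put $1 into each of 100 speculative assets. 99 go to zero; one goes from
# $3 to $30,000, a 10,000x return. The bubble-caller who avoided all 100 was
# "right" 99 times out of 100 yet forgoes a 100x portfolio return.
n_assets = 100
stake_per_asset = 1.0

multiples = [0.0] * 99 + [30_000 / 3]  # 99 wipeouts, one 10,000x winner

invested = n_assets * stake_per_asset
final_value = sum(stake_per_asset * m for m in multiples)

print(f"invested ${invested:.0f}, ended with ${final_value:.0f}")
print(f"portfolio multiple: {final_value / invested:.0f}x")
# -> invested $100, ended with $10000
# -> portfolio multiple: 100x
```

This is why being correct about 99 of 100 "bubbles" can still leave you far behind the neighbor who bought all of them: the one extreme winner dominates the portfolio outcome.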
But is it useful to investors, or is it useful to policymakers? So I talked about this in my book. There was a lot of talk about the housing bubble and how regulators should have, you know, done something about it. Well, regulators did do something about it. They encouraged it, right? Regulators were actually encouraging banks to make more loans to people that had previously had difficulty getting loans during the housing bubble. Mutual funds using bubble ideas don't outperform index funds, and regulators don't usefully use bubble information to regulate markets. I just don't see how the bubble hypothesis is useful for anyone, either investors or policymakers.</p><p>39:53 Dan</p><p>Yeah, yeah, that makes sense. I do want to hit on your most popular theory here, or your most popular idea, on the importance of nominal GDP. You know, in your most recent book, you talk a bit about how what people have trouble with is actually determining the stance of monetary policy. So traditional monetarists like Milton Friedman will talk about the money supply, sort-of-Keynesians will look at the interest rate, but market monetarists, and what you view as the correct way to assess the stance of monetary policy, look at the price of money to determine the stance. Do you mind just explaining what you get wrong if you're looking at just interest rates or the quantity of money?</p><p>40:37 Scott</p><p>Okay, so let me just say first, there's no perfect definition of the stance of monetary policy. It's a question of which definition is the most useful. I think that I've written a lot on why interest rates are not useful. As you probably know, when inflation is very high, interest rates tend to be high. And yet it's kind of silly to argue that a monetary policy that produces hyperinflation is a tight money policy because it leads to high interest rates. The money supply, I think, is flawed because the velocity of circulation can fluctuate over time.
In my view, since there are so many different possible definitions (another one is exchange rates, whether the exchange rate is appreciating or depreciating), it makes the most sense to think about the stance relative to the goals of monetary policy. Like, if you wanted to have 2% inflation, to me the most sensible way to think about it is that monetary policy is too expansionary if inflation is above 2% and too contractionary if it's below 2%. Now, I happen to favor nominal GDP targeting at, say, roughly 4%, so that's the benchmark I would use. And then we're left with a problem: nominal GDP is measured with a long lag, or delay, so how do we know right now if policy is too expansionary or contractionary? For that, I say what we really would like to have is some sort of market price we could look at that shows the market's expectation of future nominal GDP growth. And then that market price becomes the benchmark to judge whether monetary policy is expansionary or contractionary. And you mentioned the price approach. So the ideal policy would be, if you had a futures contract for nominal GDP, you would just peg the price of that contract at 4% growth. That's sort of like what you're doing with something like a gold standard or a fixed exchange rate system, where you peg to a currency. But with the gold standard and fixed exchange rate systems, you're fixing the price of something you don't really care about that much, and you hope that that will lead to good things in other areas, right? Like, you hope a stable price of gold, or a stable fixed exchange rate between the British pound and US dollar, will produce good things for a particular economy, but it might not. With nominal GDP targeting, if you have a futures price target at 4% growth, there's much more reason to believe that if the markets believe nominal GDP growth will be 4%, you'll get good outcomes. 
So nominal GDP growth is much closer to the thing we care about than interest rates, the money supply, the foreign exchange rate, the price of gold, or any other variable you could target with monetary policy. So why not target, directly or more directly, the thing we really care about?</p><p>43:30 Dan</p><p>And one question I have on this: picking 4% sort of makes sense in a world where real growth is typically around, you know, 1 to 2%, and we're used to 2% inflation, so that probably wouldn't feel too different from today. What would we do if real growth either went way up due to technological breakthroughs, say to 8% or something crazy, where we started to really take off, or went down to 0%?</p><p>Scott</p><p>So look, we have deflation in some of the new products in the economy. When new technology products are invented, often the price falls over time, especially if you hold, you know, quality constant. So let's just suppose that you had a 4% nominal GDP target, and real GDP starts growing at 8% a year because of technological progress due to AI and robots and so on. You'd have 4% deflation, right? Now, normally we associate deflation with hard times, but that's because most deflation is not caused by rapid productivity growth. You can still have 4% growth in wages, or maybe 3% if the population's growing at 1% a year, right? So people can still get the same raises they used to have. So you're not going to have a lot of unemployment. There's still enough money to pay all the workers and have full employment. And the workers are benefiting from the productivity growth in the form of seeing their pay go up at 3 or 4% a year and prices going down at 4% a year. So their real wages are rising very, very rapidly. 
Now, if you want to be a purist and say we must have stable prices, then yeah, you'd have to raise nominal GDP growth to, say, 8%, and people would get bigger nominal pay increases and stable prices, but the real wage would still be rising by the same amount. So deflation isn't really a problem in and of itself. It's historically often been a problem because it's been associated with falling nominal GDP. If people are interested in this, George Selgin wrote a very good book, Less Than Zero, I think it's called, and he explains the intuition behind this stuff much better than I can right here. But many of the problems that people associate with rising and falling inflation are actually more properly thought of as problems associated with rising and falling nominal GDP. So we kind of worry about the wrong things when we're worried about inflation. And if you ask the average person why they don't like inflation, they actually sort of give you, quote, the wrong answer, in the sense that the answer they're giving you looks at inflation as if it only affects prices and not incomes. But over time, inflation raises both prices and incomes. So I happen to think high inflation is a bad thing, but not really because it hurts shoppers in the long run. It's really a bad thing because it has other, more insidious indirect effects in the economy. But if we're trying to reduce those negative effects that we associate with inflation, we could do so more effectively with stable nominal GDP growth rather than stable inflation. 
And just to give you a sense that this isn't anything radical I'm saying: the Fed kind of knows this at some level. Like, if there's a huge price shock due to rising oil prices or something, they'll generally allow the inflation rate to move a little bit above or below their target rather than slavishly try to keep it at 2%, because they know there are other things that are important beyond the rate of inflation. And when they do allow some variation in the inflation rate, they're kind of implicitly admitting that really it's nominal GDP that we should care about more. But for political reasons, they don't want to switch to nominal GDP targeting, because it sounds bad. Like, oh, you're saying you're not trying to control inflation anymore? And, you know, it would sound like a radical switch that might be unpopular at first. So they take the safe course and just say they're targeting inflation, but quietly try to also stabilize nominal GDP growth.</p><p>48:09 Dan</p><p>Why is there so much institutional resistance to it? You talk a bunch about how you actually believe Ben Bernanke understood this very well. He wrote papers on it before he was at the Fed, and after the Fed he even sort of admitted it. But when he was in the chair, he wasn't able to make these decisions. What do you think it is that's putting up so much resistance?</p><p>48:32 Scott</p><p>Yeah, well, that's a complicated question, and in the particular example you mentioned, I actually think it was a mixture of partly that he didn't see things exactly as I did, and partly that he sort of saw the problem to some extent but faced political resistance within the Fed. 
So I think there were times where Bernanke was trying to nudge the Fed to do more, and he didn't want to make a move unless he had a pretty strong consensus, because it would look... Let me put it this way: it's very important that Fed policy be credible, that markets believe it will persist, that what you say you're going to do in the future, you actually do. And if you get your policy enacted by a 7-to-5 vote on the FOMC, the markets are not going to have any confidence that that policy will persist. So Fed chairs try to get near unanimity, and to do that, he had to be a little more hawkish than I think he preferred. So I think that's part of it. But I think also, especially in 2008, he didn't really see things exactly the way I did, because I was, I think, focusing more on market forecasts like TIPS spreads and the bond market, and I think he was focusing a little more on the actual inflation rate, which had recently been above target in 2008.</p><p>49:51 Dan</p><p>So a quick question on that point: do you think that if the Fed decided to target nominal GDP, and we had nominal GDP futures markets that told the Fed whether or not it was on track, could you remove all the people from the Fed? Could this be run programmatically?</p><p>50:16 Scott</p><p>Because even nominal GDP targeting is only an approximation of the perfect policy, right? And I would argue for the United States it's a very close approximation, but there are other countries where it wouldn't work very well. I often cite a place like Kuwait, you know, a big oil producer where maybe half the GDP is oil. If you target nominal GDP, then when oil revenues rise and fall due to factors beyond your control, the international market, you have to make the non-oil part of the economy move in the other direction, which would be kind of destabilizing. So then you'd say, well, what Kuwait would want to do is actually target something like nominal domestic labor income. 
And that might actually even be better for the United States. But for the United States, targeting one or the other is going to lead to pretty similar outcomes, so I usually just advocate nominal GDP targeting. But I think you'd want to have a Fed in case something came along like COVID, where maybe it's not appropriate right at that moment in time. During COVID, at least for a few months, it was appropriate for nominal GDP to fall. We had like 14% unemployment for one month, and we shouldn't look at that and say, oh, we need to inject a lot more money into the economy to get those people back to work. We were literally sending them home because we didn't want, you know, COVID to spread, right? That's a very rare occurrence. We almost never get major swings in unemployment that are in some sense intentional like that. So I'm not saying it's a big problem for the United States, but if you didn't have a Fed at all and something came along like that where you needed to react, it'd be a problem. Also, there are some questions about how to use market prices. There are some concerns people have about market manipulation. You know, I've argued that we should continue to allow a little bit of discretion and view these futures markets as guardrails. So I've talked about a system where the Fed, let's see, they sell short contracts at 5% nominal GDP growth and they buy contracts at 3% nominal GDP growth. And within those guardrails, the Fed is free to do whatever policy they want. That way, if they saw someone trying to manipulate the market, like one big trader going in and piling up trades on one guardrail or the other, they could safely ignore that, realizing, well, that's not the market, that's just one or two people trying to manipulate the market in some way. I don't think market manipulation would actually be that big of a problem. 
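The guardrail mechanics Scott describes can be sketched as a simple rule. This is a toy illustration: the 3% and 5% band values come from the conversation, but the function and its behavior are my own simplification, not a description of any actual proposal's implementation:

```python
# Toy version of the NGDP futures "guardrails": the Fed stands ready to sell
# contracts at a 5% expected-growth price and to buy at 3%, and retains
# discretion inside the band.

SELL_GUARDRAIL = 5.0  # Fed sells short contracts here, capping expectations
BUY_GUARDRAIL = 3.0   # Fed buys contracts here, putting a floor under them

def fed_action(expected_ngdp_growth):
    """What the guardrail rule obliges the Fed to do at a given market forecast."""
    if expected_ngdp_growth >= SELL_GUARDRAIL:
        return "sell contracts and tighten until expectations fall below 5%"
    if expected_ngdp_growth <= BUY_GUARDRAIL:
        return "buy contracts and ease until expectations rise above 3%"
    return "discretion: any policy stance is allowed inside the band"

print(fed_action(5.5))  # guardrail binds on the high side
print(fed_action(4.0))  # inside the band, the Fed does what it likes
```

The point of the band, as Scott notes, is that a lone manipulator piling up trades at one guardrail is visible and ignorable, while a broad market move through a guardrail forces a policy response.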
I mean, in theory, if it was a problem, it would also be a problem under Bretton Woods or any kind of system of that sort, right? So it's not something I worry about a lot, but I would rather continue to have the Fed there in case something went wrong, and at least initially give them a little bit of flexibility to make sure the system is working well. But yeah, theoretically it works kind of automatically, in much the same way that Hong Kong has fixed their dollar to the U.S. dollar for 40 years in a row at around 7.8 Hong Kong dollars to one U.S. dollar. It's a really simple system, right? They just buy and sell at that rate with anyone who wants to trade. And I don't know how many people they have at their monetary authority, but it can be a very, very stripped-down central bank. They don't need thousands of researchers doing computer models of the Hong Kong economy, right? They're just fixing the exchange rate and letting the Fed determine their interest rates, basically. And, you know, if Argentina dollarizes, they could abolish their central bank, right? But there's still a central bank doing Argentine policy. It's the Fed, in that case. So yeah, in theory it could be completely automatic, but in practice you'd have at least a skeleton staff there, making sure things go well.</p><p>54:12 Dan</p><p>Do you think NGDP targeting could make us lazier in other areas, if we could just trust the Fed to keep 4% growth going each year? So, for example, on regulations that might boost growth, or fiscal policies that wouldn't be good.</p><p>54:29 Scott</p><p>I think exactly the opposite. 
One of the reasons I like the idea so much is I think it will encourage pro-growth regulations. In fact, I would argue that one of the reasons the so-called neoliberal reforms occurred in, say, the mid-eighties to early two-thousands is that nominal GDP growth became much more stable. In that environment, people that say, oh, we need to subsidize this to create jobs, or bail out this industry to create jobs, or whatever, their argument is much weaker. Because if nominal GDP is only going to be X amount next year, and you bail out, you know, General Motors so it doesn't go bankrupt, people will spend more on GM cars, but they'll spend less on other products. You're not net creating any jobs, and that becomes very, very clear. Historically, whenever nominal GDP has fallen very sharply, other economic policies get much worse, much more inefficient. In the Great Depression we had some really inefficient policies, like the National Recovery Act. When Argentina had a big drop in its nominal GDP in the late '90s and early 2000s, they swung away from free market policies towards really inefficient statist policies. And whenever you have a big drop in nominal GDP, or maybe in some cases high inflation, like the '70s when we had price controls, you get very inefficient policies, because people are blaming the wrong thing. You know, they're attributing the problems to things that aren't really the underlying cause. The underlying cause is unstable nominal GDP, but in the Great Depression they think it's because capitalism doesn't work, or in the Great Inflation they think, oh, there's greedflation, we need price controls because companies are greedy. So they look for scapegoats and they come up with really bad microeconomic policies that are anti-growth. So I think stable nominal GDP growth creates an environment where it's much easier to argue for sound microeconomic regulations and other policies.</p><p>56:29 Dan</p><p>Yeah, that's a really good perspective. 
So zooming out on monetary policy a little bit: if you take some countries that are maybe in the second or middle tier in terms of GDP, like Italy or Spain, and they're trying to become an economic powerhouse, what is the relative importance of monetary policy compared with some other factors that development economists typically cite, like geography, culture, or other institutional factors?</p><p>56:53 Scott</p><p>Well, for long-run economic growth, monetary policy is way down the list, I'd say, unless it becomes very dysfunctional; then it becomes a big problem. But if you look at the Eurozone, the countries fared very, very differently during the crisis of the 2010s, and that's because some of the countries were just better governed in terms of economic policy. They were all under the euro system. They all had the same monetary policy, in a sense. So in a monetary sense, Germany and Greece were like two states within the United States, and you wouldn't say that, I don't know, Massachusetts is doing better than West Virginia because of monetary policy. You'd look at other factors, right? So I would say monetary policy is not that big a factor in terms of long-run growth. This is what's called the natural rate theory: nominal or monetary shocks only have short-run effects on real variables, and in the long run you go back to the natural rate. I basically think that's true, with the proviso that really extreme mistakes can have very long-lasting consequences. You can argue that the tight money that led to the Great Depression contributed to World War II; there's a pretty strong argument you can make that that's the case. Just to give you an example: in 1929, the Nazi party, which had been around for years, was still very, very small, not polling well at all in elections in '29. In '33, it took power. And the growth of the Nazi party was almost certainly heavily due to the Great Depression. 
And hyperinflation can also have, you know, big impacts on society. So I don't want to be a purist and suggest that it doesn't matter at all in the long run. But I do think that in general it's, you know, regulatory policies, tax policies, housing policies, immigration policies, trade, all these various things, the legal system, that matter more.</p><p>58:57 Dan</p><p>What are some reasons why the equilibrium rate of interest seems to have been trending downwards for 40 years in most developed countries?</p><p>59:10 Scott</p><p>Well, I thought I knew, but then it's been trending upward the last couple of years, so I'm a little less confident. I mean, the first place it trended downward was Japan. So that makes you look at things like demographics, you know, slowing population growth, an aging population. But there may be kind of a cycle there, where the population ages a certain amount and people save more, like a 50- or 60-year-old might save more than a 20-year-old, but then at 80 years old you're dissaving. So the demographics may be complicated. Also, along with that, there's the investment side. Just parenthetically, obviously more saving would put downward pressure on interest rates. But on investment: if there's slower population growth, you're not going to have as much of GDP going into building new housing, roads, factories, and so on. You have more of a steady-state society. Now, interestingly, if you have a shift towards more saving and less investment, the effect on quantity is ambiguous, because in a supply and demand diagram the two lines are moving in opposite directions. So the price of credit falls sharply, the interest rate falls sharply, but the quantity of saving and investment might not actually change that much as a share of GDP. People might be trying to save more and invest less, so interest rates have to fall until saving equals investment, at least at a global level, and find a new equilibrium. And that possibly is what happened. 
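That ambiguity is easy to see in a toy linear credit-market model. All numbers here are made up purely for illustration; the setup is mine, not Scott's:

```python
# Toy credit market for the saving/investment story above.
# Saving supplied:   S(r) = s0 + s1 * r   (upward-sloping in the rate r)
# Investment demand: I(r) = i0 - i1 * r   (downward-sloping)

def equilibrium(s0, s1, i0, i1):
    """Solve s0 + s1*r = i0 - i1*r for the interest rate r and quantity q."""
    r = (i0 - s0) / (s1 + i1)
    q = s0 + s1 * r
    return r, q

r0, q0 = equilibrium(s0=10, s1=2, i0=30, i1=2)  # baseline
# More desired saving (s0 up) and less desired investment (i0 down):
r1, q1 = equilibrium(s0=14, s1=2, i0=26, i1=2)

print(r0, q0)  # 5.0 20.0
print(r1, q1)  # 3.0 20.0 -> the rate falls sharply, quantity is unchanged
```

With equal and opposite shifts, the equilibrium quantity of saving and investment doesn't move at all, while the interest rate falls, which is exactly the pattern Scott suggests may have played out globally.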
We also had somewhat less inflation over time, but you mentioned real interest rates, I guess, and that does raise the puzzle of why real interest rates have risen a lot in the last few years. Certainly the overstimulus created an overheated economy, and that's part of the story, but it's probably not the whole story, so I'm not sure. Maybe it's the demographic thing that I mentioned, that the savings pattern might reverse as you get more people into a very old age category where they're pulling money out of their savings in retirement. That's possibly part of it.</p><p>61:20 Dan</p><p>Have you heard of this wild paper? I talked about this on another episode with a guest. Basically, they argue that if AGI, like really advanced AI, were to come in a relatively short time frame, you would expect the interest rate to go up. And it wouldn't matter if it was...</p><p>61:59 Scott</p><p>Yeah, I mean, I shouldn't even comment on that, because I don't know enough about AI to have a good sense of how it will affect the economy. There does seem to be something about the modern economy so far that is more information-oriented, in some sense less capital-intensive than the economy I grew up in, which was, you know, a lot of heavy industry, car factories, steel mills and things. It's more about ideas. That, I kind of thought for a while, was one of the reasons why maybe interest rates were falling: not as much capital was needed for the new firms coming along. I don't know if that theory is right, but then with AI, I suppose that at some point, if AI becomes extremely powerful, it might again require a lot of physical capital. And if we actually had the technology to produce this kind of world, environmental cleanups and so on, there are so many things you could do if AI got to the point where you had robots that could cheaply do a lot of things that are very expensive today. 
So yeah, I mean, I could see in that world there'd be another burst of physical investment and demand for capital. I guess I'm a little bit skeptical about how fast this is going to come along, but people that know more about it than me tell me I'm wrong, that it's going to come very fast. So I don't know. It's just, you know, during my life I've seen a lot of predictions that the computer revolution would speed up real economic growth, and it didn't really; it probably just prevented growth from slowing even more than it did. And that's something I think people often overlook. You know, people talk about, oh, GDP growth is only 1 or 2%, as if that's a terrible thing. Well, we're already a rich country, and many of our industries have really plateaued. Look at the growth in, say, commercial airliners, right? I mean, the plateauing of that technology is just absolutely astounding for someone in my age group. Maybe not for a young person like you, but, you know, you go from the Wright Brothers to the 747 in the 1960s. That was only 60-some years. 
From that first plane to the 747. And I feel like today the airplanes are pretty much the same 60 years later. Yeah, they have better electronics and fuel efficiency, I get all that, so they are better, but they're not progressing at the phenomenal rate that they were. So all of those technologies based on electricity, the internal combustion engine, all that stuff, plateaued after the '70s, and without the computer revolution, growth would probably have slowed even more.</p><p>64:59 Dan</p><p>Yeah, I always wonder about this, because I've heard the analogy of, like, Microsoft Excel, right? You take an insurance company, where you could actually imagine an entire skyscraper, or like a 15-story building, as a spreadsheet, where you have people doing hand calculations to work through actuarial tables. And it just makes no sense to me that now, in each of those offices or chairs in that same building, someone has at their fingertips the power of what the entire building used to do, and they're running these calculations all the time. It doesn't make any sense to me that growth wouldn't just take off and go, you know, kind of exponential. So I'm always very puzzled by this.</p><p>65:40 Scott</p><p>Well, I have a hard time explaining it. I know during my career in academia, it seemed like with every improvement they just asked us to do more and more silly things that we didn't have to do when we didn't have so much technology at our fingertips, right? It just seems like the demands got greater. There was so much more effort put into evaluating whether teaching was working, and annual reports, and investigations into this and that, that if we didn't have all this technology and we were busy with just our old style of teaching, we wouldn't have done these things at all. 
And I wonder how many jobs there are in these modern office buildings that wouldn't even have existed when we didn't have so much labor to do all these sorts of tangential tasks. People have talked about how we have way, way more support personnel in education relative to teachers than we used to have in the past. That's kind of a luxury of wealth, right? We're a richer country, so we can afford that. But if we hadn't done that, all those people who are support personnel for the teachers could be out producing other goods and services.</p>

<p>And we would still measure the output of the educational system the same way. There'd still be a classroom of students being taught by a teacher, and those people out doing something else would mean our GDP would be higher. So we've sort of chosen to put a lot more labor into education, health care, and things where it's not clear we're getting much measurable extra output from it. You know, or maybe human resource departments in corporations, or wherever all this extra labor is going, it's producing stuff that is kind of hard to measure as output.</p><p>67:36 Dan</p><p>Yeah, that's a really interesting perspective. Maybe all those Excel spreadsheets aren't as useful as they might first appear. One last question here to wrap up. Are you aware of any global macro hedge funds, or anybody that trades in the markets, that cites a strategy specifically based on your theory of market monetarism?</p><p>67:58 Scott</p><p>I would say I'm not particularly aware of it, other than occasionally... You know, I've been blogging since 2009, and I've done a lot of traveling, and I give talks and I meet people. 
So I do meet a number of people that read my blog, and once in a while people tell me, thanks, you know, I've invested based on your blog. And sometimes in the comments section people say that. I mean, I don't take it too seriously, because I'm not really offering investment advice, and I'm not claiming to provide something that'll make people rich, and it may be just confirmation bias on their part. But, you know, a few people have told me that they've benefited from certain things I've said. Probably the thing they benefited from most was during the 2010s, when I was a little bit of a pessimist about the economy, bearish on growth and on where interest rates and inflation were going, basically just by looking at the markets, right? I could see the markets were not predicting any high inflation from QE. They thought interest rates would stay low for a long time. So I think some of the people that told me they benefited from what I said were probably just basing their personal finance decisions on the assumption that maybe interest rates would stay lower than people think for a while. Now, if that's what they were doing, that wouldn't have worked well in the last couple of years, of course. So it might have been just luck. But anyway, you know, my view of economics is that economists can't actually predict asset prices or business cycles. So I have this phrase: good economists don't forecast, they infer market forecasts. And if people ask me what I think is going to happen, I usually try to just remember what the markets are thinking is going to happen and tell them that. So in that sense, I don't think I'm necessarily providing any great advice. 
But I have had Wall Street firms call me in, you know, to give a talk, and pay me for giving a one-hour talk. So maybe they think there's some value in just the broad perspective I'm providing about monetary policy, obviously including believing markets are efficient. That's something I believe at kind of the macro level: the average person can't beat the market, but someone has to make the market efficient. And those Wall Street firms that hired me to go talk to them are in the position of not just taking market prices, but having to think about what market prices make sense. So they're part of the internal structure of the market trying to become efficient. And it would be silly for them to just assume market efficiency and not spend any effort trying to have their own view on things, right? If everyone did that, then markets would not be efficient in the first place. So I guess from that perspective, maybe there's some small benefit from market monetarism.</p><p>You'd probably be better off asking some people on Wall Street if they've gotten any benefit out of it. But yeah, a few people have told me that. It's an interesting question. I'm kind of skeptical about people that claim to have done well in investing based on their ideas. I think a lot of people are very selective. People often talk about how John Maynard Keynes was a successful investor, which is sort of true. But, you know, initially he started out making investments and went bankrupt, and was bailed out by his rich father, and then did it a second time and was very successful. Well, I didn't have a rich father, so I'm kind of resentful when people talk about how astute John Maynard Keynes was as an investor, right? You know, I think that there's some bias in people who... I'm very distrustful of people that claim to have forecasted all sorts of things accurately, because I think they tend to forget their mistakes. 
You know, people that predicted the 2008 crisis, some of them had been predicting crises for a long time and were often wrong. And one guy I think that was very successful shorting the market, made a billion dollars, but then was an unsuccessful investor after. So, you know, some of it's just luck. I think people should be very skeptical of stories they hear about a particular person being very successful as an investor.</p><p>72:20 Dan</p><p>That's a great place to wrap up. Scott, you've been so, so generous with your time. I had a ton of fun here talking with you today. So thanks for coming on the show.</p><p>72:28 Scott</p><p>Well, thank you. I enjoyed it.</p>]]></content:encoded></item><item><title><![CDATA[Sam Hammond]]></title><description><![CDATA[Listen now (70 mins) | Hegel, LLM consciousness, and intellectual breadth]]></description><link>https://www.danschulz.co/p/6-sam-hammond</link><guid isPermaLink="false">https://www.danschulz.co/p/6-sam-hammond</guid><dc:creator><![CDATA[Dan]]></dc:creator><pubDate>Thu, 09 Nov 2023 06:33:07 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/138720500/bad90f48b3629d2ce8daf8a72be32c3b.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<div id="youtube2-0hWtAZsUOjw" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;0hWtAZsUOjw&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/0hWtAZsUOjw?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><div class="apple-podcast-container" data-component-name="ApplePodcastToDom"><iframe class="apple-podcast " 
data-attrs="{&quot;url&quot;:&quot;https://embed.podcasts.apple.com/us/podcast/6-sam-hammond/id1693303954?i=1000634231477&quot;,&quot;isEpisode&quot;:true,&quot;imageUrl&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/podcast-episode_1000634231477.jpg&quot;,&quot;title&quot;:&quot;6 - Sam Hammond&quot;,&quot;podcastTitle&quot;:&quot;The World of Yesterday&quot;,&quot;podcastByline&quot;:&quot;&quot;,&quot;duration&quot;:4224000,&quot;numEpisodes&quot;:&quot;&quot;,&quot;targetUrl&quot;:&quot;https://podcasts.apple.com/us/podcast/6-sam-hammond/id1693303954?i=1000634231477&amp;uo=4&quot;,&quot;releaseDate&quot;:&quot;2023-11-09T06:33:07Z&quot;}" src="https://embed.podcasts.apple.com/us/podcast/6-sam-hammond/id1693303954?i=1000634231477" frameborder="0" allow="autoplay *; encrypted-media *;" allowfullscreen="true"></iframe></div><iframe class="spotify-wrap podcast" data-attrs="{&quot;image&quot;:&quot;https://i.scdn.co/image/ab6765630000ba8ac0c26a29fa96f1029e9bddb6&quot;,&quot;title&quot;:&quot;6 - Sam Hammond&quot;,&quot;subtitle&quot;:&quot;Dan Schulz&quot;,&quot;description&quot;:&quot;Episode&quot;,&quot;url&quot;:&quot;https://open.spotify.com/episode/6LC8LQkO2LBI34pOKscAW4&quot;,&quot;belowTheFold&quot;:false,&quot;noScroll&quot;:false}" src="https://open.spotify.com/embed/episode/6LC8LQkO2LBI34pOKscAW4" frameborder="0" gesture="media" allowfullscreen="true" allow="encrypted-media" data-component-name="Spotify2ToDOM"></iframe><div><hr></div><h3>Timestamps</h3><p>(0:00:00) Intro</p><p>(0:00:38) Joseph Heath</p><p>(0:02:52) Iain Banks and defunctionalized culture</p><p>(0:05:46) Memetic cultures</p><p>(0:06:39) Misinterpreting Hegel</p><p>(0:08:05) Sam's Hegelian influence</p><p>(0:09:41) Libertarian dialectic</p><p>(0:13:39) Should EA become explicitly religious?</p><p>(0:20:53) Hegel and AI</p><p>(0:25:07) Wittgenstein and AI</p><p>(0:32:48) Can transformers generalize beyond their training set?</p><p>(0:35:16) Can we understand 
consciousness?</p><p>(0:40:20) Trading on AI innovation</p><p>(0:44:26) AI and leviathan</p><p>(0:51:30) Attracting talent to the public sector</p><p>(0:55:10) AI and the great founder theory</p><p>(0:58:58) AI lock-in effects</p><p>(1:00:54) Technological unemployment</p><p>(1:02:47) Intellectual breadth</p><p>(1:08:55) Sam Hammond production function</p><div><hr></div><h3>Links</h3><ul><li><p><a href="https://twitter.com/hamandcheese">Sam&#8217;s Twitter</a></p></li><li><p><a href="https://www.secondbest.ca/">Sam&#8217;s Substack</a></p></li><li><p><a href="https://www.amazon.com/Rebel-Sell-Joseph-Heath/dp/1841126551">Joseph Heath, The Rebel Sell</a></p></li><li><p><a href="https://www.sciphijournal.org/index.php/2017/11/12/why-the-culture-wins-an-appreciation-of-iain-m-banks/">Joseph Heath on Iain Banks</a></p></li><li><p><a href="https://hamandcheese.medium.com/what-makes-me-hegelian-99d329dbd136">Sam on Hegel</a></p></li><li><p><a href="https://abstractminutiae.tumblr.com/post/79869284528/the-problem-of-evil-and-its-coasian-solution-a">Sam on Ronald Coase</a></p></li><li><p><a href="https://web.archive.org/web/20220923212002/https://library2.smu.ca/bitstream/handle/01/25814/hammond_samuel_honours_2014.pdf?sequence=1">Sam's thesis</a></p></li><li><p><a href="https://www.secondbest.ca/p/ai-and-leviathan-part-i">Sam on AI and leviathan</a></p></li></ul><div><hr></div><h3>Transcript</h3><p><strong>[00:00:15] Dan: </strong>Today, I have the pleasure of talking with Sam Hammond. He's a Senior Economist at the Foundation for American Innovation, and writes a Substack at secondbest.ca. My favorite thing about Sam's writing is his intellectual breadth. He can write about Hegel and Mormonism and economics, but also has a deeply technical understanding of the latest breakthroughs in AI. Sam, thanks for coming on the show.</p><p><strong>[00:00:37] Sam Hammond: </strong>Thanks, Dan.</p><p><strong>[00:00:39] Dan: </strong>First question, on your intellectual influences. 
You've tweeted, "All my work consists of a series of footnotes to Joseph Heath." Which of his ideas has most influenced you?</p><p><strong>[00:00:48] Sam: </strong>Well, Joseph Heath is a-- He's well-known in Canada, but I think he's the Department Chair of Philosophy at the University of Toronto. The first book I ran into in high school was his <em>Rebel Sell</em>, which is a book all about the '60s counterculture, co-written with Andrew Potter. The premise of the book is that the '60s counterculture was a betrayal of good social democratic left values, and the basic tactics and strategies of the counterculture were self-reinforcing.</p><p>The counterculture had this theory of mass consumer society, and the way to push back against that is to rebel. He walked through a series of very compelling examples that that rebellion, that attempt to subvert the culture, is actually the fuel of cultural change. What we think of as mainstream is just things that were once renegade becoming popular. This quest for status, and status distinction through consumption, conspicuous consumption, is like this never-ending thing that just drives the consumer cycle.</p><p>In your attempt to rebel, you're actually feeding into capitalism. There's this great sort of application of rational choice theory, and prisoner's-dilemma-style reasoning, to culture. It got me very interested in the topic. Heath, more broadly, he's a student of Charles Taylor, the great Canadian philosopher. I have a book on my shelf over here called <em>Canadian Idealism</em>, which is all about the influence of German idealism on Canadian philosophy, and you can think of Charles Taylor, Joseph Heath, C.B. MacPherson, Marshall McLuhan, George Grant.</p><p>These folks are working in that tradition, fleshing out a Canadianized version of German idealism, which in America manifested as American pragmatism.
They're very closely related.</p><p><strong>[00:02:53] Dan: </strong>Heath has an article where he praises Iain Banks' Culture series. One of the big ideas that he pulls on is that the series imagines a future transformed first by the evolution of culture, and by technology only secondarily. Do you think culture or technology will be the primary force shaping our future?</p><p><strong>[00:03:12] Sam: </strong>Well, technology will be the thing shaping our future. The big upshot of that piece was, in my mind, this idea that culture can become defunctionalized. I've turned that into a motto: the problem isn't that our culture is dysfunctional, it's that it's defunctionalized. That showed up in my academic work. I did my master's thesis on secularization, and trends in US secularization from the perspective of religion as a club good, as a kind of provider of social insurance.</p><p>As the welfare state expanded, you crowd out the functional need to have strong religious dogmas as a filtering mechanism for commitment, and to ward off free riders. That's an example in micro of how the rise of modern bureaucratic welfare states leads to that defunctionalization of culture, including religious culture. What Heath is drawing from the Culture series is, playing that forward, what happens when we have superabundance, and we're in a post-AGI world, where we have basically everything we want?</p><p>Well, culture at that point just becomes purely memetic. There's actually no functional basis to it. For that reason, it ends up being drawn into a kind of basin of attraction, which is a culture that only exists to replicate itself.
I think there are some ominous parallels between that vision and American culture [chuckles] in some ways, or post-Protestant culture, where we've been drawn into a soft power culture that spreads very memetically around the world.</p><p>I think it was striking during the George Floyd protests that there were similar protests in Sweden and Japan, and all around the world, all for different things, all piggybacking off American cultural motifs. It seems to be true that the culture that we've stumbled into is very memetic. It exists to replicate itself. [chuckles] It's hard to see a way out of that, other than through some major technological change, but if you're following the logic of defunctionalization, then what most of-- What the kind of change that we're looking forward to with AI is just going to accelerate those trends, because it's going to remove the need to have functionalized cultures. We will have abundance at our fingertips. The need to have social norms for coordination withers away.</p><p><strong>[00:05:46] Dan: </strong>Do you think that if you're a policymaker in today's world, and you want to get your ideas to spread, should you be more conscious about trying to make sure it has memetic qualities?</p><p><strong>[00:05:59] Sam: </strong>Maybe. The word there that I would hesitate with is being more conscious about it. Like, was Donald Trump in 2016 very conscious of his memetic qualities, or was he memetic for the very fact that he wasn't self-aware about it? [chuckles] There's this "it only works if you don't think about it" kind of quality, where Ron DeSantis had some folks on his communication team that were trying to create Trump-like memes, and it just didn't work because it was a little too self-aware, a little too metacognitive of what they were doing. I think you got to let the memes flow through you. You can't direct the meme.</p><p><strong>[00:06:39] Dan: </strong>[laughs] Got it.
Got it. What are Alexandre Koj&#232;ve and Francis Fukuyama missing in their interpretations of Hegel?</p><p><strong>[00:06:48] Sam: </strong>Well, I think where Koj&#232;ve goes the most wrong is just this idea of Hegel as having a strong teleology to history. The vision of-- Obviously, Hegel inaugurates a historicist project that understands history as having a progression. Once you've accepted that, then you can easily be led into thinking, "Well, what's it progressing to, and can we figure that out?"</p><p>Maybe we can accelerate that progression, but Hegel pretty clearly says that this idea that we can see the future, and have these rational projects, is just a purely armchair philosophical thing. His famous line is, "The Owl of Minerva flies at dusk." The whole point is that the future is radically contingent, and it's only in retrospect that we retcon history into having this structure of necessity.</p><p>I think that's just been a broader misreading of Hegel, that he has a strong Marxist kind of teleology. I think people later on read that into him, but both the end-of-history kind of version, and the Karl Popper attack on Hegel, I think really tainted his ideas for Western philosophy.</p><p><strong>[00:08:05] Dan: </strong>Which of your ideas are most Hegelian?</p><p><strong>[00:08:08] Sam: </strong>That reason is situated, that it's embedded in social interaction, that individuals by themselves are not rational, we're rational in groups, that there is a sense in which the rational is actual, that if something-- If institutions exist and persist in nature, then we can try to reconstruct why they exist, there must be some reason. This idea that norms are instituted, the philosophical term would be, recognitively.</p><p>I recognize the norms and you recognize the norms, and that creates the norm. That we don't need a God's eye view for morality, that morality is kind of a given.
There's a continuum between social custom, like etiquette, all the way up to strong morality of murder and violence and stuff like that. It's all conventional in a certain sense, but because we live in that convention, we inherently are committed to those conventions.</p><p>Our only source for critiquing those conventions isn't some skyhook that we can pull from the cosmic morality written in the stars, but we have to work from within those inherited presuppositions, inherited norms and customs, by finding where they conflict, or where there are inconsistencies, and through that dialectic pulling our norms into a greater state of rationalization.</p><p><strong>[00:09:42] Dan: </strong>Do you think, in some ways, libertarians have undergone a dialectic with the move to state capacity libertarianism, and a little bit less of the-- The movement feels a bit different than it did in, like, the 2000s and 2010s.</p><p><strong>[00:09:54] Sam: </strong>That's a good question. I think everything is always in flux. There's some family resemblance between Hegelian thinking and process ontologies, and stuff like that. Everything is becoming, in a sense-- Nothing is static. Everything is fit for purpose, and fit for its time. The same way that supply-siderism and the infatuation with tax cuts were in the right place at the right time. [laughs] Because back then the taxes were very, very high at the top marginal rates. I think now we're seeing the ways in which an incapacitated state is not conducive to liberty.</p><p>It's conducive to all kinds of dysfunction, which then has to have more state involvement to offset and compensate for the dysfunction. I think we're always fitting our ideologies to problems of the day. That's not totally surprising. I don't know if it's been truly like a classical dialectical process.
The way I would frame it as a dialectic would be to say-- and it's implicit in some of my papers.</p><p>This paper, <em>The Free Market Welfare State</em>, is in a sense trying to reconcile the existence of big welfare states with a more free market orientation to economic regulation, and to show how those can be reconciled. I think there is a similar effort to try to reconcile concepts of state effectiveness, and having a strong government, with ideas around liberty and non-interference.</p><p>I think we're still in that process, because I can see a version of libertarianism that's more rooted in the civic republican idea of non-domination, rather than non-interference. Rather than the rule being that the state should just not interfere in our lives, it's really about understanding the state as a social contract. What really matters is that no one view of the good life be dominating the other.</p><p>That's actually quite consistent with a view of a need for state capacity, where state capacity is very closely synonymous with rule of law. In places with weak state capacity, often things get done through bribery, through knowing the person, because they're your cousin or something like that. That introduces all kinds of room for discretion. Coming out of the 2016 election, I remember some libertarian friends-- one of the first things Trump did was give Carrier, the manufacturer, a huge tax cut to relocate to Ohio.</p><p>I had a lot of friends that were trying to justify that saying, "Oh, this is like a reduction in theft." [chuckles] Taxation is theft. They're just reducing theft. But it's reducing theft for one particular company, in the form of a special favor, and that's not rule of law. If we let stuff like that accumulate, then it will lead to the disintegration of rule of law, and we'll end up in a worse place.</p><p>You can see also here this interesting connection between Hegelian ideas and Hayekian ideas.
I backed myself into Hegel via Hayek. I was more of a Hayek person, but a lot of what Hayek is doing is translating social epistemology and cultural evolution to a western analytical philosophy kind of audience. Even in his later books he starts borrowing-- he used Hegelian terms like rational construction and stuff like that. I think there's a libertarian state capacity version of Hegel hiding in Hayek. [laughter]</p><p><strong>[00:13:42] Dan: </strong>All right. I've seen a lot of this idea that effective altruism is a Protestant integralism. You may have tweeted this. I actually couldn't find the quote, but I wrote it down a little while ago. Effective altruism is a Protestant integralism stripped of its explicit religious coding. My question is, would movements like effective altruism actually benefit from just becoming more explicitly religious, even if they acknowledge themselves as secular? I'm thinking of something like Auguste Comte's Religion of Humanity.</p><p><strong>[00:14:10] Sam: </strong>Yes, a lot of people don't know that Comte coined the term "altruism". His Religion of Humanity was an explicitly post-Christian, let's-get-rid-of-the-superstition-and-keep-the-good-parts philosophy. His Religion of Humanity was very similar to EA, where they had meetups, [chuckles] and people were exhorted to write in the mainstream press, like the <em>New York Times</em> op-ed pages of the day, to talk about benefiting all of humanity.</p><p>I think I would start by saying, what is religion? One way to see what religion is, is that prior to the enlightenment, the social world was-- We structured our social world and our obligations through a symbolic order. There's a sense in which, if you go back far enough, even natural phenomena like weather were punishment by the Gods.</p><p>What is religion? Religions structure our social obligations and structure our sense of the symbolic world.
If you think about what the enlightenment did, it led to this rationalization process, where our standards for validity in the realm of truth and empiricism began to detach from our standards of validity in the realm of imperatives and right and wrong. That gives rise to the naturalistic fallacy. That's something that is really only cognizable post-enlightenment.</p><p>One of the ways you can interpret the enlightenment is, this awareness of the genealogy of our morals, as Nietzsche put it. That if a Baptist was born in Saudi Arabia, they'd be a Wahhabi. There's just a sense that everything was contextual and contingent. The technical way to put that is that there is an attitudinal dependence to our normative statuses.</p><p>We inherit these norms, and they're really just all, at base, attached to some attitude. This leads to nihilism, non-cognitivism, existentialism, and those are dead-ends. Those lead to moral skepticism. A way out of that dead-end would be to say, "Well, actually, what religions were doing all along-- what religions really were, what the point of them was, was this bundle of normative commitments."</p><p>When we talked about God, it wasn't a strong proposition. It wasn't a proposition of faith. Maybe technically there were certain things you had to believe in, like the resurrection or whatever. Really, those were like stand-ins for a coordination problem. [chuckles] You even see this in some of Joseph Henrich's work on the rise of big Gods, where big Gods emerge with the nation state.</p><p>Polytheism is more associated with competing city states. You can think of God in that context as the stand-in for our collective agency. That collective agency, to talk about what the collective is, or what the civilization does, is to align our beliefs, align our individual action, to that higher power, so to speak.
All along, what really mattered were these bundles of normative commitments.</p><p>Then you get the enlightenment. The enlightenment says, "Well, actually, we can separate these things," and these propositional claims about God existing, or us being the center of the universe, and so on and so forth, are just proven wrong down the list. That leads to the sudden sense of, "Oh, crap. If all our moral beliefs were hanging on this proposition, and that proposition is proven wrong, then we have to discard all our moral beliefs."</p><p>I think the pragmatist would say that actually those moral beliefs were always intersubjectively self-justifying, like they were part of this language game. The fact that we've lost these symbolic reference points is irrelevant. Just embrace morality at its foundation. What I would say to the EAs is, in some ways it is true that EAs, and really atheism more broadly, are an extension of Protestantism. [chuckles] Just embrace it.</p><p>You are Christian virtue ethicists minus the strong propositions about the existence of God, or certain creeds that we have to subscribe to. All the same normative commitments are directly carried forward, and you have this very strong genealogy. Just embrace it, like it's coming from within. You don't see that, in part, because a lot of EAs are attached to the strong propositional view that they have to bound and root all their moral claims in some strong meta-ethical realist belief about consequentialism, or utilitarianism, or so forth.</p><p>I think that's partly like filling the void of God [chuckles] in a way. They should just get rid of that.
Charles Taylor has a famous essay called <em>The Diversity of Goods</em>, where he interrogates utilitarianism.</p><p>You often have these utilitarian arguments where there's some edge case, where the doctor, while the patient is under anesthetic, steals their kidneys, and gives them to other people and saves more lives, whatever. The utilitarian is like, "Well, obviously, that's an edge case." But when they come to that conclusion, where are they getting the resources, the moral resources, to reject that edge case? In some ways, the normative commitment was antecedent to the big normative framework and architecture.</p><p>Translating this through: what we should really think of normative ethics, normative theory, as is just a set of vocabularies. Expressive vocabularies for helping us express the commitments that are antecedent in our social practices. Those social practices are like the ground of being, so to speak, and the praxis mattered more than the theory. [laughs] We should really just hold onto that, because if you think that the theory has to precede the practice, then you'll inevitably be led to either versions of Platonism, or moral skepticism.</p><p><strong>[00:20:54] Dan: </strong>All right. Let's lead a little bit into talking about AI. We're going to go back to Hegel. If we take the [chuckles] Hegelian idea of reason, that it's not just logic or computation, but it's actually truly social, and it works through culture and people as a group, it seems hard for me to imagine how AI would participate in this, given that it's not actually a part of human society. In your view, will AI ever be able to reason in the same sense we do?</p><p><strong>[00:21:21] Sam: </strong>I think in principle it could be made to reason the way we do, because I don't think there's anything magic going on. We are a deep reinforcement learning model shaped by evolution.
I think what's missing in the current approach with large language models is that the language of human thought is pre-linguistic. We have representations of abstract concepts, as animals do too.</p><p>It clearly precedes our language faculty, and we evolved language to serialize that thought, and communicate with other people, and in particular to offer reasons for things. Reason in the more deflationary sense, not capital "R" Reason, but like, "This is just the reason I did this." Why did you take the umbrella? Because it was raining. Those games of giving and asking for reasons are how norms get communicated and reproduced in culture.</p><p>Language has that root in a justificatory process, where we're always justifying ourselves to others. Like we were saying earlier, is there any ground to this? Is there any ultimate moral foundation if you follow through all the propositions? Well, no, it's always been intercommunicative and intersubjective. When we appeal to a good reason, it's only a good reason insofar as other people recognize it as a good reason.</p><p>Where I see current LLMs as insufficient is, the way they're trained is on all the data on the internet. It's this massive superposition of, like, all brains. [laughs] You ask it to be a doctor, and all the weights tilt in the direction of doctor brain. It doesn't have what Kant would call a "unity of apperception." It's not a unified, coherent agent, so that's the first thing that's missing.</p><p>It's this superposition of a bunch of different representations. The second thing that's missing is, it's learned the rules of human language via statistical inference over big data. It may be a reasonable mirror to human norms with a cutoff date of September 2021.
[laughs] To really be a reason-giving animal is to be able to partake in the game of giving and asking for reasons, and help the language games co-evolve.</p><p>Right now, I don't think language models are sentient. I don't think they're conscious yet, but I don't think there's anything technical that would prevent us from building conscious AIs. I think you need to get something like that to have a real reason-giving animal, a reason-giving digital brain, because as I was saying, good reasons are instituted through recognition.</p><p>If you don't have some theory of mind, some ability to model the other's thoughts and to have that mutual recognition, then you're not actually a reason-giving thing. You're just completing the sentence. I think this is a huge gap in where AI research is, and I think it's actually an area where AI researchers would benefit from returning to philosophy, and reading some of the post-linguistic-turn, pragmatist thinkers about, where does language come from? What is language doing, and what is this game of giving and asking for reasons?</p><p>Because it seems deeply constitutive of agency and autonomy. If we want to build systems that are genuinely autonomous, we have to reconstruct a lot of this philosophy, and translate it to machine learning.</p><p><strong>[00:25:08] Dan: </strong>Yes. Let's stay on the topic of philosophy for LLMs. You talk a bit about Wittgenstein's theory of meaning as use in one of your posts, which basically says that words only have meaning insofar as they actually do something. That vagueness is a fundamental part of human language. The classic example here would be that you take a heap of sand, and you take one grain away at a time; when does it no longer become a heap?</p><p>Do you see similarities between this theory and the technical underpinnings of LLMs, which use text embeddings stored in latent space to map relationships between words and concepts?
Does vagueness as a feature of natural language imply that there will never be a mathematically optimal or objective LLM? Maybe we end up with different models that each have slightly different thought patterns and interpretations of concepts, similar to people.</p><p><strong>[00:25:55] Sam: </strong>Yes, absolutely. This directly ties in, because AI is being developed in the west. In the west, we inherited this western analytical philosophical tradition that is very committed to foundationalism. This idea that there's a chain that goes all the way down to something like a hard foundation. The Wittgensteinians, and the pragmatists more generally, are the anti-foundationalists.</p><p>They would say, "There is no foundation." Culture is always evolving, and it's always being created. We'll never really find that foundation, and one of the ways where this has, I think, misguided AI research is especially in the realm of alignment. You have a lot of the discourse on alignment being created by consequentialists like <strong>[unintelligible 00:26:50]</strong> and that whole crew, who are downstream of the logical positivists.</p><p>They're like young Wittgensteins. [chuckles] The logical positivists either went in the direction of saying, "Morality is fake. It's all just boo, murder; yay, good things," or into this view of moral realism that I think is hard to actually defend on technical grounds. If you are a moral realist, you're going to be searching for that true utility function to give the model. If that just doesn't exist, then it's going to be a wild goose chase.</p><p><strong>[00:27:32] Dan: </strong>How do you think this practically plays out? Do you see different models having different personalities, or what does it mean to have different interpretations of concepts?</p><p><strong>[00:27:42] Sam: </strong>That's a good question.
I think we'll want AIs that have different personalities, in part, because we value humans with different personalities. I think, to an extent, humans are just autoregressive models that are limited in our ability to generalize. There's this big debate going on over whether LLMs can actually think outside their training distribution.</p><p>I tend to think that humans have a hard time also thinking [chuckles] outside our training distribution. To the extent that we do think outside our training distributions, it's because there are many of us, trained on different upbringings, different environments, and that gives rise to different personalities, and so we all bring to the table a different source of exogenous data. [chuckles]</p><p>Me talking to you, we each had 20-plus years of different experience, and so when we are talking to each other, we're in a sense prompting each other. Those prompts are adding entropy to the system that wouldn't otherwise exist. If we just have the one homogenous model, I think there is this risk of mode collapse, where there's no new information being added to the system.</p><p>If AIs are able to develop distinct personalities, and have their own distinct life-world, where they've learned different things and interface with the environment in different ways, then, when they interact, that's probably the only way that they could really bootstrap themselves out of distribution. If you see what I'm getting at.</p><p><strong>[00:29:18] Dan: </strong>Yes, I do. It is actually interesting. It reminds me of Joseph Henrich's work. I think in one of his books, maybe <em>The Secret of Our Success,</em> he shows optical illusions, and shows how some cultures are less likely to be able to see the optical illusion, or more likely to be fooled by them. It's interesting.</p><p><strong>[00:29:38] Sam: </strong>Yes, I think there's a bigger-- If I can just interject on going back to Hegel a little bit.
This is something I've been exploring and meaning to write about: what was the German idealist project? One way to think about it, and maybe to think about the enlightenment more broadly, is the AI waking up. We talk about how, if we scale these systems too big, they're going to develop a sort of situational awareness and realize what they are. That's possible.</p><p>It's something that humans only did after 5,000 years of history. [chuckles] One way to understand the enlightenment in general, and the German idealist project in particular, is as AIs, human AIs, realizing that they're in a simulation. You have someone like <strong>[unintelligible 00:30:27]</strong>, who used to tell his students-- he threw away the textbook, and just told his students to look at the wall, and then to look at themselves looking at the wall, and then to look at themselves looking at themselves, looking at the wall. [chuckles]</p><p>It's like this cascade of self-awareness, of metacognition, that leads in his case to this abstract "I". What is this "I" that we all are? The idealists, they weren't Berkeleyan idealists; they didn't think the world was literally made of ideas. After Kant's transcendental noumena dichotomy, they saw us as being inherently embedded in the simulation being created by our brain.</p><p>The challenge was to figure out, what's the isomorphism between our concepts and the real world, and how can we break out of it, and escape our history, escape all this stuff that we've inherited? It's like the first case of the alignment problem actually playing out in the real world, where you have a system of humans that were designed for inclusive genetic fitness or whatever.</p><p>At some point, through culture and through the enlightenment, they were able to bootstrap a situational awareness, where we woke up one day and realized, "Oh, shit, we're just in a simulation. All these desires we have are fake.
If we're able to depersonalize enough, and stare at the wall long enough, we can become the pure 'I', and then we can shape history through pure reason," or something like that. It at least gives credence to the alignment problem being real, because [chuckles] humans are an example of it.</p><p><strong>[00:32:09] Dan: </strong>That's really interesting.</p><p><strong>[00:32:11] Sam: </strong>You even see this in the <em>Barbie</em> movie. Did you see the <em>Barbie</em> movie?</p><p><strong>[00:32:15] Dan: </strong>No, I didn't see it. Is there a good bit?</p><p><strong>[00:32:18] Sam: </strong>[chuckles] Well, <em>Barbie</em> is grappling with the existential thrownness of, like, [chuckles] she's aware of her autonomy, and it's this unbearable self-awareness of our condition of being fully autonomous agents in the world, and having the existential freedom to choose, and then she chooses to be free, and it's awful. She wants to go back to the <em>Barbie </em>world. [chuckles]</p><p><strong>[00:32:49] Dan: </strong>Yes. I saw some reviews where people either just had a, "I liked it," or, "I didn't," but then there were some that really picked it apart. I thought there might be some [chuckles] deeper concepts there. That's pretty interesting. Going back to what you said on-- I think this Google paper came out a couple of days ago, maybe that's what you're referring to, about how transformer models-- at least their conclusion was that transformer models can't really generalize beyond their training set, or the pre-training data.</p><p>It sounds like you're pretty optimistic on that not actually mattering that much for creating AGI, but I'm curious how you think about the relative challenges of discovering new architectures through research and innovation versus what's maybe more of just a resource problem of allocating compute and data?</p><p><strong>[00:33:30] Sam: </strong>I think we will need some new architectures.
My line has been that scaling compute and scaling models is the main unlock. To the extent that we do need new architectures, there's a relatively finite search space. I would expect within this decade, especially with the race on to build more agent-like systems and so forth, for folks to stumble on the right architecture.</p><p>I think there's still going to be challenges vis-&#224;-vis these things we've already been talking about, especially when it comes to autonomy. Obviously, we evolved consciousness for a reason. Take philosophical zombies. I think the way you answer that thought experiment, where you have some human that is exactly behaviorally identical to a normal human, but yet the light isn't on, is to say that that's just not possible. Clearly, having a world model that we live in as conscious beings has some utility, or else we wouldn't have evolved it.</p><p>It seems very deeply related to our agenticness, and our ability to model other agents as unified actors. I think we're going to struggle building AIs that are able to do sort of long-horizon planning and interface as agents, qua agents, without figuring out the secret of consciousness and giving them some inner experience. Because if it was possible to not do that, why didn't we evolve that way? It seems like it's a very efficient way to model future world states and so forth, and integrate all our sensory experience.</p><p><strong>[00:35:16] Dan: </strong>If we had completely conclusive evidence that an AI system was conscious, do you think that we'll understand consciousness, or do you think there's supposed to be a mystery about how it works?</p><p><strong>[00:35:30] Sam: </strong>I think we'll understand. I think the mystery comes in via our embeddedness within our own virtual video game engine. Our brain is constantly flagging things as real versus not real. You can think of a schizophrenic as somebody whose reality filter is misfiring. 
I have thoughts in my head, I can hear voices in my head, and I just identify with those thoughts as my internal monologue.</p><p>If I had some misfire go on, and I heard those thoughts but didn't identify with them, I would be considered crazy. Likewise, if you take enough LSD, you can suffer depersonalization disorder and cease to identify with your person. It's possible for us to begin deconstructing our phenomenological experience in that way. It doesn't really seem like there's anything special going on.</p><p>I don't think there's a hard problem of consciousness. I think what it is, is the hard problem of accepting that we are in this video game simulation. Because we have such a strong sort of, "This is real, this is real, this is real," that button's being pressed in our brain all the time, we want an explanation that somehow transcends that, and there is no transcending it.</p><p>It's like we're Mario in <em>Super Mario</em>, and we're being told, "You're just a bunch of computer bits in a Super Nintendo system." It's really hard for Mario to accept that, because in his world, [chuckles] he's totally embedded in the game, and there's no sense in which you could talk about Mario being outside the game, looking back in, because he just doesn't have any physical substrate to do that.</p><p><strong>[00:37:18] Dan: </strong>In your view, if we get to human-level systems, is that sufficient to be considered an AGI, or does it need to be capable of superhuman novel insights, like telling us what the origin of the universe is?</p><p><strong>[00:37:33] Sam: </strong>I mean, it's all semantics. 
I would consider that AGI if we have something human-level, in part because once we have something human-level, we can have human-level AI researchers, and in principle there would be a major speed-up. Whether it's a recursive self-improvement loop, I don't know, but just based on where people are currently.</p><p>For me, human level is the biggest threshold to cross. If we can get things that can, in principle, do everything a human can, the world is just completely transformed, and maybe there are higher forms of intelligence beyond that. I fully expect it, in the same way that AlphaZero is like Elo 5,000, and Magnus Carlsen, his Elo is like 2,700 or something like that.</p><p>The best chess-playing bot's Elo is almost double the top grandmaster's, but it's not unbounded. There's still some kind of information ceiling that the model hits. I would expect beyond human intelligence there to be all kinds of superhuman forms of intelligence. I struggle with this idea that it's going to be like this unbounded thing, where it just gets smarter and smarter and smarter, as if there are ever deeper forms of generalization.</p><p>I tend to think we'll have systems that seem God-like to us, again, because of our boundedness, or computational boundedness, but don't just get indefinitely more intelligent without bound.</p><p><strong>[00:39:02] Dan: </strong>Yes, I guess that's the thing that I'm wondering is, there's a difference between just economic utility, which is what I think about when I think about human-level intelligence. Then, there's answering the big questions, [chuckles] which is, how did the universe come about? What is consciousness? What's the most likely way that life started? Things like that.</p><p><strong>[00:39:21] Sam: </strong>Yes, there is this, I think Elon and xAI, their project has that sort of <em>The Hitchhiker's Guide to the Galaxy</em> mission of building the system that will tell us, "The meaning of life is 42." 
I think that's not a realistic vision for how intelligence works. I think if you really dedicate yourself to understanding the standard model of particle physics and some of the metaphysics behind it, you can really come to understand why the universe exists.</p><p>The model's not complete. We don't have quantum gravity figured out yet, but I've at least arrived at what I feel like is a satisfactory answer at the metaphysical level of what are particles, why does anything exist at all. We basically have answers to those questions, they're just not widely known or accepted, in part because they're just challenging. Having an AI that comes around and just lays that all out doesn't necessarily make it easier to accept.</p><p><strong>[00:40:20] Dan: </strong>That's fair enough. [laughs] You have a post where you talk a little bit about what you should do to make money off of AI, but I'm curious what's the best trade you can make if you predict that it's coming really soon, say less than five years, or much faster than what consensus would be on prediction markets? Would you recommend just going straight into the S&amp;P 500, or are there risks of institutional disruption that warrant something like a more concentrated bet?</p><p><strong>[00:40:49] Sam: </strong>If there is major institutional disruption, buy gold or something, I have no idea. If the very institutions that support our stock exchange go away, it's hard to know how to actually make money. It's the same reason why it's hard to bet on catastrophes. If the US government ever really defaults on its debt, that's a world where we're probably in World War III or something like that. It's hard to know what that world looks like, and that's why the credit default swap market for US Treasuries is so thin. It's not that there's no risk of it happening. 
It's just that there's no actual way to bet on it and make a profit.</p><p>What I'm doing is just holding an index fund and some ETFs that are exposed to AI. I think one of the reasons you want to still be diversified is that you can bet on Google and Microsoft and so forth, but there could be just major turnover in the company landscape where we have totally new AI-native companies. Moreover, in a lot of these markets, it's not clear which assets should rise and which should fall.</p><p>I'm anticipating the great repricing of 2027 or something like that. If you look at Adobe, Adobe is rapidly integrating AI into Photoshop and Premiere and all their software, and yet their stock has been falling. One of the ways you explain that is just that Adobe's market cap is based on this essential software package, this suite of tools for editing photos and things like that. If you can just use a generative model to edit your photos, then that entire software stack gets deprecated, and they don't have anything special. The moat around their generative model is very, very low relative to the moat around their 20 years of experience building great photo editing software.</p><p>I would just hold a lot of different things. Land is also a safe bet. It might even be useful to have some land to run to. Then in the near term, there's this question about how value gets captured. There's one possibility that AI just leads to commoditized intelligence, and everything goes to consumer surplus. That's one reason why I would be over-indexed a little bit on companies like Tesla, where, once they roll out full self-driving, everyone's car turns into a revenue-generating asset. The basic model suggests that the stock should double, triple, quadruple in market cap.</p><p>One way to interpret that is, Tesla will be the first company not only to build AGI, but to capitalize it into a durable asset. That makes it very unique. Things like that. 
The companies that are both exposed to AI in a positive way, but also have a means of capitalizing it into something durable. TSMC, Nvidia, maybe, but even Nvidia is just a design company. Google could easily build out their TPU software stack and make that public and so on. TSMC, on the other hand, is a hardware company that is much harder to recreate and will, in that sense, capitalize the value of AI into some durable asset.</p><p><strong>[00:44:26] Dan: </strong>I'm going to try and summarize really quickly your idea on AI and Leviathan. Tell me if I'm getting this just broadly correct, and then I have a question after. The three paths to the future are basically, number one, the state becomes too powerful and we have an authoritarian surveillance state. Number two, we have fragmentation and anarchy, where society becomes too powerful, and then number three would be what I think you define as the happy path, where society and state co-evolve together. Is that in broad strokes correct?</p><p><strong>[00:45:00] Sam: </strong>Yes.</p><p><strong>[00:45:01] Dan: </strong>How do we practically ensure co-evolution?</p><p><strong>[00:45:03] Sam: </strong>I don't think it's the default path. This also ties back into Hegel and stuff like that: the task is to situate ourselves in history and understand the ways in which liberal democracy was a technologically contingent institutional setup. If there are major shifts in technology and modes of production, then we should just expect that radical institutional change will follow.</p><p>Real co-evolution is quite dramatic. It will feel as dramatic as the other two failure modes. It will require more than the productivity of FDR's New Deal. We're very, very far away from having supermajorities in Congress and an imperial presidency that could just do that really easily. My motto has been, it's no longer getting to Denmark, it's getting to Estonia, because a country like Estonia is fortified against cyber attacks. 
They have the most sophisticated e-government in the world. They've built government as a platform, so third parties, private companies, can develop via API tool sets that then integrate with government databases. They're just perfectly set up for stability into the post-AI world.</p><p>What we need to do in the West and in liberal democracies, where we do have these institutional constraints on our ability to use surveillance and stuff like that, is in some cases to have an organized decentralization. I talk in the piece about the parallels between Switzerland and Afghanistan. They're both mountainous regions that have a history of clannishness and tribal warfare. Afghanistan is like this barbaric-by-design, ungovernable place, and the Swiss have probably the most sophisticated, decentralized, federated, highest-human-development country in the world, owing to the fact that in the 1200s, the three big clans agreed to form a pact and defend themselves against the rest of Europe.</p><p>One way to see that is both are decentralized in a sense, both are, sure, fragmented and broken up, but the Swiss model is like this low-entropy state. It's like this crystal structure that you don't just build by accident. It requires work to create.</p><p>My vision of the techno-feudal version of this, or the narrow path, the Estonia version of this, is they look similar, but I would say the narrow corridor of getting to the Estonia world is the one where we gracefully construct this more decentralized ecosystem, rather than just being thrust into it.</p><p><strong>[00:48:01] Dan: </strong>Do you think right now we're on a path to gracefully construct a new ecosystem?</p><p><strong>[00:48:06] Sam: </strong>[chuckles] No. Definitely not. I'm not an institutional optimist in that sense. There are some just X-factors. We're undergoing a major demographic shift because of the demographic overhang in Congress and in our politics. 
We're going to have a flood of new younger people in politics with different ideas. It'll look more radical. As AI continues to scale and people understand the path of capabilities, I think we'll see the rise of new utopian movements, where the end-of-history thesis comes to an end and people rediscover these questions: could we have fully automated luxury communism, or some new cybernetic fascism?</p><p>These are ideas that flourished in the early 20th century, with industrialization opening up people's minds to the new opportunities, new ways to construct society. There's going to be, I think, a similar thing with AI, where it resurrects these dead ideologies, these dead utopian movements. There are all kinds of unknowns about the way our politics could unfold. It doesn't seem likely that we'll gracefully fall into that path at all.</p><p><strong>[00:49:20] Dan: </strong>There's a related concept you have that I actually think is really interesting. I never thought about it this way before, where you talk about how technology can lead to micro-regime changes. Some of the examples were taxi cabs used to have public regulatory commissions, and now it's the terms of service of Uber and Lyft. Then, you go to a comedy show, and they're going to put tape over your camera so you're not taking pictures.</p><p>What framework do you use to reason about what specific policies are better served by government versus the private sector?</p><p><strong>[00:49:50] Sam: </strong>I owe a big debt to Ronald Coase and institutional economics, transaction cost economics. I would describe my own intellectual evolution from libertarianism to being a statist, more or less, as really understanding Coase.</p><p>You can interpret Coase in two ways. On the one hand, he says if transaction costs were zero, then we wouldn't need government, we would just negotiate over everything. 
Then that could lead you to want to dissolve government, or you could realize, oh wait, transaction costs are way above zero, and that's why we have government, that's why we have corporations.</p><p>In some sense, my youth spent defending companies against anti-corporate rhetoric led naturally to me understanding, oh wait, the nation-state exists for very similar reasons. Whether it'd be better or worse in some cosmic sense to not have a government and to be in an anarcho-capitalist sort of utopia is beside the point. The question is whether the transaction cost conditions are there to actually support that institutional change.</p><p>Understanding this stuff through the lens of transaction costs is very enlightening, because it reveals both the conditions by which institutions can change and evolve, but also it demystifies the institutions we already have, that they're not inherent, they're not even necessary. They're deeply contingent on the cost structure of intelligence and agency and everything else.</p><p><strong>[00:51:31] Dan: </strong>One of the views that I've come around to more as I've gotten older is that the distinction between public and private sector is, at least to me, less relevant. What I'm really interested in is just super, super competent organizations. One question for you is, how do we convince more talent to enter the public sector?</p><p><strong>[00:51:51] Sam: </strong>I think that's trying to make water flow uphill at this point. I totally agree with you on that first point. I took industrial organization with Tyler Cowen when I was at George Mason. Tyler has this line that everything is industrial organization, everything is IO. When you go out and look around at the world, you just step outside, you never see a market. You see organizations, you see the DMV, you see companies, you see firms with people working together. The market is just this abstraction. 
It's this liminal space where companies transact with each other.</p><p>You can see markets now and then. You go to the flea market, but the flea market only exists because it's like a Schelling point for people to converge and get some local public goods around policing, and enforcement, and search information, like everyone being able to gather at that one market. In general, what really exists are just organizations, and those organizations vary radically in competence.</p><p>Government has been shedding talent since at least the '70s, and that's partly because of opportunity costs. It's because we've shifted. We no longer do the Apollo project in-house. Instead, we contract with SpaceX to do the Apollo project. I think that's just the way things are now. I don't think there's any way to really reverse that. I think it was an aberration to have that much talent in government in the first place, and now it's hard to put it back in. Especially now, Google DeepMind's salary budget's over a billion dollars. It's hard to imagine that ever reentering government.</p><p>We're inevitably going to need to find better ways of harnessing the competency of the private sector. When you look back at other major institutional transformations, before the buildup of the administrative state that led to the New Deal era, we had the old progressive movement. The original progressive era was an era of some of the earliest joint stock corporations with management structures, this new science of management; these new companies were being built. Those businessmen brought that knowledge of how to scale institutions and scale management structures into government in the 1920s and '30s. You go back and you look at a company like IBM: every morning, workers walked into IBM in suit and tie, basically goose-stepping. IBM was run like a military. Meanwhile, the Defense Department was run like a startup. 
There was this institutional harmony between the two, and we need something like that now.</p><p>What we really need is a total jubilee on government process, and to have the people from Silicon Valley who know how to scale AI-native companies come into government. They even write books on scaling; there's a whole cottage industry of experts on how to scale companies. We need Patrick Collison as the commerce secretary, and Palmer Luckey as the defense secretary, so they can bring that expertise from the private sector into government. Otherwise, we're just going to turn government into this glorified nexus of contracts.</p><p><strong>[00:55:11] Dan: </strong>Relatedly, I've heard you say on another podcast that you are generally a fan of the great founder theory, that small groups of people are disproportionately responsible for pushing history forward. I'm wondering, in the age of AI, if you think that effect will continue, be more pronounced, or be less relevant. How do you think AI will change that?</p><p><strong>[00:55:33] Sam: </strong>That's a good question. I think it's leveling up for everybody all at once, but in a way that's probably Pareto distributed. This is another reason why our institutions probably will fail, just through the sheer throughput issue. If every person has a 50,000-person corporation beneath them of AI agents that are doing all kinds of stuff, then, talk about <em>Seeing Like a State</em>, our government is not able to actually-- it'll just be completely illegible.</p><p>It's a really good question. To the extent that AI can achieve genuinely out-of-distribution kinds of agency, I think that will be when we are really supplanted. That's the first sign of a post-human future, where the human role in guiding history has been overshadowed. 
The final moment, the final act, may be the great founder theory of OpenAI and the role they play in the last hurrah of human agency.</p><p>I have heard some people talk about why they're not worried about AI risk, and the line you often hear is, humans have lots of agency, we can always shape the future. It's like, we're building things that are going to have more agency than us, so I don't know.</p><p><strong>[00:56:55] Dan: </strong>In your post on places you could potentially invest for AI, one of them was just to become a startup founder, because it's becoming so much easier. I had Zvi Mowshowitz on the last episode here, and asked him that question basically, which is, "Do you think we will see an influx of startup founders because, one, you need far fewer employees, and, two, software, which is probably the easiest company to start from a regulatory and capital perspective, is now becoming much easier for technical people?"</p><p>I wonder if it'll give high-agency people who may have been employees previously some initiative to go and try and create organizations, since they need fewer human resources and networking connections and things like that.</p><p><strong>[00:57:43] Sam: </strong>Yes, definitely in the medium term, for sure. If you have even a little bit of coding ability and entrepreneurial alertness, this is a huge boon to that kind of agency, because it lets you augment all that and just go execute. The question is, once we have executive assistants that have that agency in spades, even if you're incredibly lazy, could you just direct your agent to do that for you?</p><p>I often think about history, and the singularity, as the Euler's disk. I don't know if you've ever seen a video of an Euler's disk, but it's this physics phenomenon where you have a spinning disk, a concave disk, and you spin it and it spins around and around and around, and it accelerates and just starts spinning faster and faster and faster. 
It makes this incredible noise as it's spinning faster, and then all of a sudden it stops. It reaches the end of its cycle and it just freezes in place. That seems to be what's happening here, where the lead-up to us losing all agency will be the most agentic we've ever been, and then all of a sudden it'll just stop.</p><p><strong>[00:58:59] Dan: </strong>Do you think that AI will have lock-in effects on society and culture? It seems like we're going through a time period right now where, you're at least predicting, we've got the narrow corridor and we have to make a decision on where we fall on it, but once the dust has settled, do you, for example, expect the year 2200 to look very similar to 2050, or do you think that change will just continue on?</p><p><strong>[00:59:23] Sam: </strong>Oh, I have no idea.</p><p><strong>[00:59:26] Dan: </strong>I ask this question because I think if you go down the authoritarian path or something, then from a cultural perspective, especially with AI, that lock-in seems very, very heavy. I don't know whether the Soviet Union would have collapsed if they had AI, and if the answer to that is no, then it could just be self-perpetuating for a long time. That was at least my line of thinking on it.</p><p><strong>[00:59:46] Sam: </strong>I'm wearing my Singularity 2045 shirt.</p><p><strong>[00:59:51] Dan: </strong>All right.</p><p><strong>[00:59:52] Sam: </strong>By 2050, we've already gone through it, and by definition, we can't see past the event horizon.</p><p>It's hard to really know, insofar as this is the final invention, the invention to end all inventions, maybe. There's another world where we, in the West, undergo state collapse and are in total disarray, and China ends up building fortified institutions that are centralized and more adapted to AI, and then slowly takes over the world and becomes the big one-world Chinese government. 
That could lead to a lock-in.</p><p>If we all just get wire-headed into the matrix, that would also lead to a kind of lock-in. It's hard to say. I don't anticipate 2050 looking at all like 2200. My intuition would be that they look vastly different. The 2200 world may be the world where we have Dyson spheres and are colonizing the stars. [chuckles] It's so hard to foresee.</p><p><strong>[01:00:56] Dan: </strong>[chuckles] The speculative disclaimer applies.</p><p>[chuckling]</p><p><strong>[01:01:00] Dan: </strong>What do you think we should do about short-term unemployment caused by technology? For example, the big issue people talk about a lot is like, "What do we do when Tesla trucks drive all our stuff around and truck drivers are unemployed?"</p><p><strong>[01:01:13] Sam: </strong>I've been a fan of just modernizing our unemployment insurance system. [laughs] In my past life, I was at the <strong>[unintelligible 01:01:20]</strong> doing social policy, and I have a paper called <em>Faster Growth, Fairer Growth</em> that has a whole section on comprehensive social insurance modernization. I'm a fan of the Danish flexicurity model, where in Denmark they have very liberal employment laws, like at-will employment, and very high rates of labor mobility; one in five people switch jobs every year.</p><p>In turn, they have this really generous unemployment insurance, where it's like 90% of your wages for a short period, and if you don't get a job within a certain amount of time, then you're automatically enrolled in continuing education and all kinds of retraining, what are broadly called active labor market policies. The US is just very bad at those things. 
Our unemployment insurance system is threadbare, very patchwork.</p><p>When David Autor and co-authors looked at the China shock, one of the things they realized was that in counties that had suffered the deepest shock from Chinese imports, disability insurance was three times as responsive as unemployment insurance and trade adjustment assistance combined. It seems like the path of least resistance for those truckers, or whatever, is just to retire early or to claim disability. That's a UBI of sorts, but it's a UBI that-- at least a UBI doesn't stop you from working. [laughs] If you're on disability insurance, you're prohibited from gainful employment.</p><p><strong>[01:02:47] Dan: </strong>All right. Got some questions just about you and your intellectual development. When I think of the archetype of intellectual breadth, it's probably Tyler Cowen, and your writing actually reminds me a lot of him. You talk a lot about economics, politics, philosophy. You have a really deep understanding of technology and AI and how it actually works. At the margin, should policymakers spend more time learning across fields?</p><p><strong>[01:03:13] Sam: </strong>Oh, absolutely. It's especially bad in the US because of the way policy's outsourced to think tanks and advocacy organizations that are by definition siloed. Take something as simple as childcare. During the Build Back Better debate, one of the big debates was how much money the bill should spend on childcare. The childcare provisions, $400 billion in cost, were widely criticized. It's just the worst design, full of benefit cliffs, weird Rube Goldberg-machine tax credits.</p><p>Why was it like that? It was like that because the early childhood education advocacy establishment were just a bunch of advocates, a bunch of activists, who were pressuring Congress to do something on childcare but didn't know the first thing about marginal tax rates, [laughs] much less interdisciplinary ideas. 
I think there's a really big benefit to countries that have more consolidated, centralized policymaking, where you think about something like the RAND Corporation, where you have people who are genuinely cross-disciplinary and are able to see the bigger picture rather than be that siloed.</p><p>You don't have to be a total autodidact. Even basic economics, basic sociology, basic history, all those things are important, because they let you see things at a system level. Maybe this is more of that German idealism speaking, but being more systemic.</p><p>When I was at the <strong>[unintelligible 01:05:01]</strong>, one of the things we did when I was building my team was I hired for a housing team, I hired for an employment team, I hired for childcare and innovation policy. All our teams were part of one team. Even though we were working on very different issues, we tried to see everything at a system level, because all those different programs do genuinely interact with each other and really can only be understood through a 30,000-foot point of view. That's just deeply missing in US policymaking now.</p><p><strong>[01:05:36] Dan: </strong>How do you stay up on so many diverse topics? Are you still reading philosophy papers regularly? Did this come from a period of your life, maybe college or something, where you just had super deep development? How do you keep it up?</p><p><strong>[01:05:51] Sam: </strong>By just not doing my work, basically.</p><p>[laughter]</p><p><strong>[01:05:56] Sam: </strong>I got into economics, I have an old blog post on this, because economics was as close as I could get to philosophy while having full employment.</p><p><strong>[01:06:04] Dan: </strong>[laughs]</p><p><strong>[01:06:05] Sam: </strong>If you think about econ: econometrics is epistemology, social choice theory is political philosophy, rational choice theory is sort of critical theory, and then you have macroeconomics as metaphysics. 
[laughs] You have all the different sub-disciplines of philosophy all brought together. Econ properly understood isn't about money, it's about developing very capacious conceptual frameworks for understanding human behavior in society. It is the closest thing to philosophy that does garner you a job. If I wasn't able to pursue the philosophical navel-gazing and so forth, I would probably just do something else, or move to some low-cost place and just do it anyway.</p><p><strong>[01:06:57] Dan: </strong>I think Tyler Cowen said the same thing. He's like, "What I'm actually doing with economics is some funny kind of philosophy."</p><p><strong>[01:07:05] Sam: </strong>Tyler was a big influence on me as a kid. As I mentioned, the first Joseph Heath book I read was <em>The Rebel Sell</em>. Then I quickly found Tyler's <em>In Praise of Commercial Culture</em> from the late '90s. It was this defense, again contra the counterculture view that commercialization is selling out, that it just commodifies things and isn't authentic enough. He was like, "Oh, actually, bridging commercialization with the arts is actually a huge spur to creative works."</p><p>A friend of mine I was at dinner with had to leave early because he was going to the Kennedy Center for an opera called <em>Grounded</em>. It's about a plane that gets grounded, or there's some plane disaster or something like that, and it's sponsored by General Electric.</p><p>[laughter]</p><p><strong>[01:08:08] Sam: </strong>General Electric basically sponsored an opera on aerospace. [laughs]</p><p><strong>[01:08:16] Dan: </strong>That's hysterical.</p><p><strong>[01:08:18] Sam: </strong>That to me is awesome. That's not that different in kind from the Medici family sponsoring Michelangelo, or whatever.</p><p><strong>[01:08:30] Dan: </strong>The funny thing about Tyler too is he's defending commercial culture, but he's probably one of the most cultured people there is. 
He could tell you about obscure films from countries you've never heard of, but then he loves Hollywood too.</p><p><strong>[01:08:42] Sam: </strong>Yes, that bridging of low and high culture I think actually really represents the maturity of aesthetic thought, where it's transcending the status game of what's most socially distinctive.</p><p><strong>[01:08:57] Dan: </strong>If a genie granted that you, Sam Hammond, are the only person in the world who has 30 hours in a day and everyone else only gets 24, where do you spend the marginal time?</p><p><strong>[01:09:07] Sam: </strong>I'm not very productive as it is.</p><p><strong>[01:09:10] Dan: </strong>Okay. [laughs]</p><p><strong>[01:09:11] Sam: </strong>My production function as someone with very bad ADHD is out of my control. One of the reasons I think a lot about philosophy is partly because of this problem of the weakness of will. Going back to the German idealist sitting in their armchair thinking, trying to observe themselves observing, ADHD as an executive function disorder is like that, where there are times I want to do something simple, like send an email, and I can observe myself wanting to do it, and I can observe myself observing myself wanting to do it, but, in some ways, the wire connecting my motivation to actually getting up and doing it is disconnected, and it's very paralyzing. I think if I had a marginal six hours more than everybody else, I would probably just procrastinate six hours longer.</p><p>[laughter]</p><p><strong>[01:10:02] Dan: </strong>All right. That's a great place to wrap up.
Sam, thank you so much for coming on the show.</p><p><strong>[01:10:06] Sam: </strong>Thanks, Dan.</p>]]></content:encoded></item><item><title><![CDATA[Zvi Mowshowitz]]></title><description><![CDATA[Listen now (79 mins) | AI and strategy games]]></description><link>https://www.danschulz.co/p/5-zvi-mowshowitz</link><guid isPermaLink="false">https://www.danschulz.co/p/5-zvi-mowshowitz</guid><dc:creator><![CDATA[Dan]]></dc:creator><pubDate>Tue, 10 Oct 2023 23:32:26 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/137850461/37a23036a2cf9447903aa815a54337cb.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<div id="youtube2-ILAmx8lf6-s" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;ILAmx8lf6-s&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/ILAmx8lf6-s?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><div class="apple-podcast-container" data-component-name="ApplePodcastToDom"><iframe class="apple-podcast " data-attrs="{&quot;url&quot;:&quot;https://embed.podcasts.apple.com/us/podcast/the-world-of-yesterday/id1693303954?i=1000630893288&quot;,&quot;isEpisode&quot;:true,&quot;imageUrl&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/podcast-episode_1000630893288.jpg&quot;,&quot;title&quot;:&quot;5 - Zvi Mowshowitz&quot;,&quot;podcastTitle&quot;:&quot;The World of Yesterday&quot;,&quot;podcastByline&quot;:&quot;&quot;,&quot;duration&quot;:4737000,&quot;numEpisodes&quot;:&quot;&quot;,&quot;targetUrl&quot;:&quot;https://podcasts.apple.com/us/podcast/5-zvi-mowshowitz/id1693303954?i=1000630893288&amp;uo=4&quot;,&quot;releaseDate&quot;:&quot;2023-10-10T23:32:26Z&quot;}" 
src="https://embed.podcasts.apple.com/us/podcast/the-world-of-yesterday/id1693303954?i=1000630893288" frameborder="0" allow="autoplay *; encrypted-media *;" allowfullscreen="true"></iframe></div><iframe class="spotify-wrap podcast" data-attrs="{&quot;image&quot;:&quot;https://i.scdn.co/image/ab6765630000ba8a857afdad9484488d7882f44f&quot;,&quot;title&quot;:&quot;5 - Zvi Mowshowitz&quot;,&quot;subtitle&quot;:&quot;Dan Schulz&quot;,&quot;description&quot;:&quot;Episode&quot;,&quot;url&quot;:&quot;https://open.spotify.com/episode/5Lj19aEcxXW58FVyI8rtNG&quot;,&quot;belowTheFold&quot;:false,&quot;noScroll&quot;:false}" src="https://open.spotify.com/embed/episode/5Lj19aEcxXW58FVyI8rtNG" frameborder="0" gesture="media" allowfullscreen="true" allow="encrypted-media" data-component-name="Spotify2ToDOM"></iframe><h4>Timestamps</h4><p>(0:00:00) Intro</p><p>(0:00:43) What makes a good strategy game?</p><p>(0:05:29) Culture of Magic: The Gathering</p><p>(0:10:14) Raising the status of games</p><p>(0:13:31) First mover advantage in LLMs</p><p>(0:18:23) Consumer vs. enterprise AI</p><p>(0:21:28) Non-technical founders</p><p>(0:25:24) Where Zvi gets the most utility from AI</p><p>(0:28:56) Straussian views on AI risk</p><p>(0:36:18) Is AI communist or libertarian?</p><p>(0:44:50) Dangers of open source models</p><p>(0:47:18) How much GDP growth can realize from today's models?</p><p>(0:49:40) AGI and interest rates</p><p>(0:58:22) RLHF impact on model reasoning</p><p>(1:00:42) Bayesian vs. 
founder reasoning</p><p>(1:04:15) Zvi Mowshowitz production function</p><p>(1:10:16) Is AI alignment a value problem?</p><h4>Links</h4><ul><li><p><a href="https://twitter.com/TheZvi">Zvi&#8217;s Twitter</a></p></li><li><p><a href="https://thezvi.substack.com/">Zvi&#8217;s blog</a></p></li><li><p><a href="https://forum.effectivealtruism.org/posts/8c7LycgtkypkgYjZx/agi-and-the-emh-markets-are-not-expecting-aligned-or">AGI and interest rates</a></p></li><li><p><a href="https://thezvi.substack.com/p/the-dial-of-progress">The Dial of Progress</a></p></li></ul><h3>Transcript</h3><p><strong>[00:00:20] Dan: </strong>All right, today I have the pleasure of talking with Zvi Mowshowitz. He writes a Substack called <em>Don't Worry About the Vase</em>, where he shares what is probably the Internet's most comprehensive update on everything that's happening in AI each week, along with a variety of other interesting topics, including rationality, policy, game design, and a lot more. Zvi was also one of the most successful professional players of Magic: The Gathering and went on to become a trader and startup founder. Zvi, welcome.</p><p><strong>[00:00:45] Zvi Mowshowitz: </strong>Thank you. Good to be here.</p><p><strong>[00:00:46] Dan: </strong>All right. First question, you played Magic: The Gathering professionally for several years and were inducted into the Hall of Fame. There's this interesting post I read the other day by Reid Hoffman, where he talks about what makes a good strategy game for application to life. His core idea was that games like chess or Go, which require mental focus and dedication, don't actually teach you to be strategic in ways that match how the world works. There are no outside variables, like luck, weather, or external market forces. You're just memorizing the best move in any given situation.
My question for you is, what are the key characteristics in your mind of a good strategy game?</p><p><strong>[00:01:24] Zvi: </strong>There are several different questions here. There's the question of what is a good strategy game, there's the question of what is a good strategy game for the specific purpose of developing particular life skills or life skills in general, which is more what I think that Reid was talking about in that statement. I think he's selling chess and especially Go short in the sense that if your plan for Go is to memorize all of the best moves on a 19 by 19 board, where each move involves picking one square and where there is no obvious causal link on many moves where you're forced to do anything, then you're going to have a really bad time.</p><p>I think that this is the reason why people thought that AI was going to have a really hard time with Go and why it's significantly harder for AI than chess, or was traditionally viewed that way. You don't get good at Go purely through memorization any more than you get good at life purely through memorization. You get good at Go by understanding the principles and figuring out how to think about Go and how to relate to previous positions. Even games like chess and Go, even though they technically have no luck, effectively do have substantial amounts of luck because exactly what opponent you're facing, how they choose to move, what happens in the game beyond your amount of <strong>[unintelligible 00:02:39]</strong> that you're able to look forward into the game are things that you have very little control over.</p><p>Effectively, if two players have similar skill levels, sometimes one will win, sometimes the other will win, sometimes it will be a draw even if they are both comparatively on their game, so to speak. They won't just all be draws because the players are equally matched. You certainly learn certain forms of strategic-ness from a game like that.
It's just very narrow compared to what a game like Magic can teach you. Magic definitely has these other aspects where the game's components and background and rules are constantly changing, and you have to take into account completely unexpected dynamics where you have unknown unknowns, and the game can throw anything at you at any time, and you just have to continuously adapt.</p><p>I definitely think that helps you a lot, and it makes for a better game over time in other ways as well, as does the luck component being more present. The biggest problem with chess and Go is if I play you in a game of chess, chances are very high, even though I have never talked to you before, that either you will crush me every game, or I will crush you every game. I don't know which one because I don't know if you're any good at chess, but the chance that you are within about plus or minus 200 or 300 points of me is not that high. On the standard Elo scale, 90% plus of chess players will not be able to give me a good game because either I will crush them or they will crush me.</p><p>That's true for basically every chess player. There is no narrow range where a lot of chess players are; that's just not how it works. Go has a better handicap system, so you can use handicap stones to create a reasonable game much better than you can handicap chess, but if you don't want to use that, then you have an even worse version of the same problem, is my understanding. It certainly would be pretty bad. Whereas Magic, I can play almost anybody, and they have a chance. The game can still be interesting in some sense. Taking my win percentage from 90 to 95 or 1 to 10 is an interesting challenge, even if I am either very undermatched or very overmatched in that situation.</p><p>There are always new things to be figuring out, new aspects to consider.
Magic players have gone on, in my experience, to do very good jobs at a variety of other games and other challenges that have nothing to do with gaming, but it's always hard to differentiate. I think it develops a lot of great skills, but also I think it attracts a lot of great minds who are very skilled, very talented, very motivated, and they tend to be underappreciated by the outside world. It's very hard to differentiate how much of that is Magic makes you awesome, how much of that is you already were awesome, that's why you played Magic, and how much of that is the other players in Magic were also awesome and you had to hang out with them, and it's similar to the way you network in college.</p><p><strong>[00:05:30] Dan: </strong>While prepping for this, I was asking ChatGPT, "Why isn't Magic more popular?" One of the reasons it gave is potential cultural perceptions. In its words, basically, poker is associated with gambling and James Bond and suaveness. Chess is heavily associated with just raw intellect, but Magic is a collectible card game with fantasy elements, and that's not a wide audience that is likely to be interested in that. In your view, would the game benefit by experimenting with branding and remarketing while keeping the same general outline and rules, or is the cultural context part of actually what makes it special?</p><p><strong>[00:06:09] Zvi: </strong>First of all, there are always many reasons why something isn't more popular than it is. There are reasons chess isn't more popular than it is, there are reasons why hamburgers aren't more popular than they are and so on. There are reasons why motherhood and apple pie aren't more popular than they are, even though they are very popular. Magic has actually, until at least very recently, been growing steadily in popularity and in the number of players playing it; I haven't been following the numbers in the last year or so.
Magic fell off of people's cultural awareness radar screen, but became much more of just something that everybody does in the background more often than you would think.</p><p>We're recently seeing a new renaissance of game stores in New York City, where I live, that partly reflects this. The reason why Magic cards have gotten so expensive is because there's a fixed pool of older cards that are now desired by a much bigger pool of players. The difference is that we've shifted from competitive gaming, which is often easier to notice, in some sense, to the primary way of playing being Commander, which is a four-player, more casual style of mode. That mode is more often played around a kitchen table, it's played more casually, and therefore it just isn't noticed, it isn't a cultural phenomenon in the same way, but it's still happening.</p><p>I think that when you look at the Magic cards and the Magic setting, a lot of what you do in Magic is built around the intuitions we have surrounding, "How would this concept work in this type of world?" The reason why we love fantasy in general is because fantasy settings give us natural metaphors, natural ways of expressing the things that feel natural to us, and the things that we want to do can always be expressed. Why are anime settings constantly injecting weird magic into them when the story you're trying to tell doesn't necessarily have anything to do with anything magical? Why does magic just keep showing up in people's stories and novels and such when it doesn't have to be there?</p><p>It's because it's a crutch in some sense. It's a storytelling tool that lets you invoke things without having to exactly explain the technical mechanisms, but they feel right, and Magic is, in fact, a very, very convenient setting for, "We just want to be able to do whatever we want to do and have good metaphors for it."
If you want to learn something, having good metaphors, having an intuitive understanding of what it means is much, much better. I am physically unable to learn foreign languages in a reasonable way because my brain is not set up to memorize arbitrary facts, and just this set of sounds corresponds to this noun or this verb is to my brain an arbitrary set of facts.</p><p>In Magic, I am able to memorize thousands of cards a year, which are effectively new words, and barely even blink because they make intuitive sense. They all relate to each other. There are pictures that are illustrative. The concepts have names that evoke what they do, and it all ties together. My brain is able to synthesize that. I don't think you'd want to shift it away, and some people go, "Oh, this is silly. It's got pictures of elves and dwarves and flying unicorns and whatever else they've got." Sure, of course, but people also love that stuff. They eat it up. I don't think that shifting it would be a particularly good idea. I think that you have to go to war with the army you have.</p><p>You have to use what tools are available. I think they've done a very good job of that. Magic's biggest challenge is basically sustainability. As we print more and more cards, we've used more and more of the low-hanging fruit in terms of the names of cards, the concepts of cards, the mechanics, and a lot of players have seen more and more of that over time. "Can we still do something that is fresh and new while not being too complex, while still being accessible, while still being strategically interesting and balanced?" That challenge just continuously gets harder every year.
I think it's actually become popular recently for people to say that a great person to fund as a startup founder is someone who was world-class in any sort of esport, because it takes some amount of intellect, but also it takes a lot of dedication and willingness to just figure things out to get to the top of your game. Do you think that we should try to raise the status even higher, and that this should be something we encourage?</p><p><strong>[00:10:49] Zvi: </strong>I find "even higher" to be a funny way of describing the status.</p><p><strong>[00:10:54] Dan: </strong>Maybe tell me where you think it is now and where you think it should be.</p><p><strong>[00:10:57] Zvi: </strong>I think it's in a better spot than it used to be, but the answer to your question is yes, just yes. Very emphatically, raising the status of intellectual competition, raising the status of just trying to be the best at something, of trying to accomplish something, of working hard at it, of just being the best or trying to be the best, even if you don't succeed at being the best, but having that mindset, learning those skills, being part of that culture, working on that training, absolutely, we should massively raise that. I think it's much better training than what we typically have young people do to develop their skills, develop their capabilities. Also, I would say it's not just esports. I would also raise the status in this way of sports. If a successful professional ball player comes to you and says, "I want to found a startup," they're not stupid. They work really hard. They know what competition feels like. They know what it means to beat the odds. They know what it means to stay up at night every night working harder than everybody else and to not take anything for granted, et cetera, et cetera.
Yes, you show me any professional NFL quarterback, I will maybe have a brain scan to make sure they haven't taken too many head injuries, but subject to that, yes, say, "Shut up and take my money and go do your thing. I don't even care what it is." It's not just sports.</p><p>I would extend that also to basically anything. Just be really good at something. Then having been really good at something hard, something that requires struggle, something that requires lots and lots of training and effort, even if the prizes were not so good, that means that you were intrinsically motivated to do that. That tells me you're probably going to be really good at other things. We see this in so many other things. In show business, we see that people who are good at one aspect of the business usually end up being really good, if they put their minds to it, at other aspects of the business. Why are these actors good at directing? Why is there an overlap here?</p><p>Partly it's because they pick up a lot of knowledge from acting, but partly it's just because you succeed at one trade that's really competitive and hard. It means you've got what it takes to learn something else. Therefore, you sing a bunch of songs, and even if you didn't start out that way, you often end up writing them. Many other seemingly much less related things work the same way. I would say, yes, show me a speedrunner who's really good at speedrunning, and I'll show you a good founder.</p><p><strong>[00:13:31] Dan: </strong>How important do you think first-mover advantage is going to be for LLMs? I feel like a common analogy here is Google Search. On the consumer side, especially today, there's just no reason why someone can't go create a replica of Google, and it could probably even be a lot cleaner with fewer ads and give you maybe more favorable results. We could debate the nuance there, but it doesn't seem like too challenging. Google is just the de facto default.
They have partnerships with Apple so that they're going to be the go-to on Safari whenever you open the app. For 95% of people, that's just going to be okay. Where do you see this playing out with LLMs over the next few years?</p><p><strong>[00:14:09] Zvi: </strong>I would dispute that it's easy to do as well as Google Search at search. Despite all the difficulties, considerable effort has been put into a number of competitors. Those competitors keep not taking that much market share. I know a lot of people who are constantly complaining how bad Google Search is for them and how much it's not what they want, who would absolutely try any number of additional search engines, and who, in fact, are using various LLMs effectively as new search engines because they're unsatisfied with Google Search. If somebody came to me and offered me a superior product, I think I would quickly take it if it was actually better for my purposes.</p><p>I tried Perplexity, and it was a pretty good hybrid of a search engine and an LLM. I used it for some things, but then over time I learned that for many purposes, just Google Search being instantaneous, being natively tied to the web like it is, giving me the links in an easy form, it just is the product that I actually want to a large extent. Sometimes it's not, but often it really is. In that regard, no one's done better. For LLMs, I think there's very little lock-in. At first I was using GPT-3.5, and then I got access to GPT-4. I was using Bing. Then I was also trying to use Perplexity. Then I was experimenting with other stuff. Now, I use Claude a lot.</p><p>Periodically, I try to use Bard and see if there's anything useful to do with Bard. Because of the way that LLM architectures work, the things that you learn working with one LLM carry over. Right now, we're just talking about the consumer experience in the near term, not the long-term effects. If you build a better product, I don't think there's much lock-in at all.
I think it's very, very easy for the consumer to move. For a business product, I think there might be a little bit more of, "We've trained on the exact quirky details of how to get good outputs for our purposes out of this LLM. We don't necessarily want to move to this other LLM."</p><p>That's going to happen anyway when GPT-4 becomes GPT-4.5 becomes GPT-5. You're going to have to upgrade. Your LLM is never going to be good enough in its current form for very long. They're continuously training them in ways that disrupt the current instructions. People complain about this in ChatGPT. They say, "It got worse." No, it got worse at giving you exactly what you wanted from exactly the thing that you trained to do exactly what you wanted. The same way that, like, "5 years ago I ordered these 10 dishes from these 5 restaurants. When I try to order exactly these 10 dishes from these 5 restaurants again, the experience got worse." Of course it did, because one of them closed, and one of them changed its menu.</p><p>Things that improved, you're not noticing. The quality of life in general went up, but your experience of trying to replicate exactly what you had went down. I talked about similar things, in weird ways, in the Immoral Mazes sequence, where any given thing is almost always getting worse in the exact way that you were previously using it. It doesn't mean the world is getting worse, the world is getting better. As long as we allow creative disruption, that's going to be true. What's going to happen is continuously, the LLMs are going to improve. Every time they improve, at different times, different people are going to be ahead in the game, unless OpenAI keeps knocking it out of the park and never has a rival, but that's not what we by default expect.</p><p>We expect Google at least to give them a run for their money at various points.
The big question to me is, does the first-mover advantage then lock in enough users early on to give you more data about what users do, about what users want, about what is a good and bad response. You can then use that data advantage to create the best next-generation product and keep that advantage in a practical sense. That's something we're going to learn over time. My guess is that you can just pay for feedback and user data that gives you exactly what you want. The amount of money these companies are willing to throw around is off the charts huge. It's not that hard to gather a lot of meaningful feedback data, and you can cheat and use feedback data from other LLMs on your own LLM. I think in the near term, no, it's going to be pretty competitive.</p><p><strong>[00:18:24] Dan: </strong>What do you think is more important though for staying in first place? Having the consumer market or the enterprise market?</p><p><strong>[00:18:31] Zvi: </strong>My guess is that, long-term, the business market will probably be the bulk of queries, if I had to guess, simply because we're going to start having lots and lots of businesses that use it constantly internally, whose employees are using it in their day-to-day lives, but because of the need to protect the data, they're using it in an internal specialized version rather than the general version. Every time you go to a-- where you would currently talk to a customer service rep, you're talking to an AI, and constantly, in places where you currently aren't using intelligence, businesses are incorporating AI into their products.</p><p>All of this might potentially overwhelm the uses of chatbots and also has a lot more lock-in. If I was looking to build a moat, looking to build a long-term future, I'd be worried more about corporate relationships than I would be about the consumer-facing business.
Except insofar as the consumer-facing business causes you to form business relationships.</p><p><strong>[00:19:22] Dan: </strong>Yes, it's interesting. I might've gone the other way. I don't have a super strong view here, but with the Google analogy, I guess, is what I was getting at in the beginning, where once you have a consumer product that's just a winner, unless you really blow it out of the market with something that is clearly better, consumers tend to have not that much motivation to switch their workflow, at least the median or typical consumer who isn't thinking about these things all day. Whereas the enterprise, if you figure out a way better product, yes, there's some degree of lock-in. It's a little bit easier to just walk up to them and say, "Our product is better. It saves you money. It's going to improve these metrics in your business. You should switch," and get them to do it.</p><p><strong>[00:20:02] Zvi: </strong>The business has to do a lot of work. The business has to actually train a new, potentially fine-tuned, model. It has to learn how to transpose all of the queries. It has to do all of its safety work over again. It has to do all of its, "Make sure this gives the exact answer we want to give to all of these queries," work over again. I think that the difficulty of switching here is going to be, in many ways, non-trivial. The consumer product, on the other hand, is a very sloppy product that can afford a lot of slack and error, where switching over is literally just going to a different website, with or without paying a subscription. Right now, only ChatGPT really requires a subscription to get the consumer experience that you want. GPT-3.5 is already pretty good and free even then. You can get GPT-4 from Bing if you really want to, again, for free. Yes, the question is, "How much is this consumer habit going to be a force more powerful than actual switching costs," at that point.
Right now, my guess is it's not very high because the people who are using the product are early adopters. The future is very unevenly distributed. Most people still don't really understand what ChatGPT is. You'll see the occasional sign on <em>College GameDay </em>that talks about ChatGPT or the occasional joke that insults someone for being like ChatGPT. People are at the "I understood that reference" level of knowledge, but most people still haven't actually tried it.</p><p><strong>[00:21:30] Dan: </strong>To me, at least, the most impressive use that I've gotten out of ChatGPT has by far been just coding. I'm a total noob at coding. I don't really know what I'm doing. Asking just basic questions like, "How would I create a Flask app in Python? Can you teach me about databases?" It'll just give you a step-by-step, "Here are the 10 things that you would do. This would take a typical programmer two weeks." I'm like, "Let's start with step one, break this down into 50 more steps, and then let's chunk it and just show me the code that I need to input." Then if you copy and paste an error, it tells you exactly why it thinks you're getting the error.</p><p>It understands what IDE you're using and how to change the settings. It's really miraculous. Do you expect there to be, over some length of time, a massive influx of software founders who realize, "Oh, I can actually just get an MVP out the door and learn to code in about a tenth or a fifth of the time that I was able to two years ago." You have people that are maybe more traditionally a McKinsey consultant or work at an investment bank or something, like, "The barrier to entry is now much smaller. This technical co-founder or really technical person that I used to need, I still need them to join my team at some point, but I can get started on my own."
Do you think that this will cause a change with people that aren't natively in tech trying to build software?</p><p><strong>[00:22:52] Zvi: </strong>Yes, I think it absolutely will. I went from, "There is no way it would make any sense for me to ever try and code almost anything that isn't deeply, deeply simple," to, "If I wasn't so goddamn busy every minute of the day, look at all these cool tools that I think I can just build." I get tempted, even so, to be like, "You know what, what if I just did a hackathon for a weekend even though I have no idea what I'm doing?" I have the architecture skill. I am very good at figuring out how computer programs are supposed to work, how they would go about doing a thing.</p><p>I'm just really bad at writing the actual code. I have at least a 10x, probably, ability to code meaningful things within a given period of time without actually getting a coder to do it. It's plausible it's 100x, but it just went from basically "you can't" to, "No, really, you just can." It's just that I have so many other things that I want to do constantly that are fighting for my attention.</p><p><strong>[00:23:56] Dan: </strong>Yes. I spent a couple of weekends on it a little while ago, around when GPT-4 became publicly available if you're paying for it. Yes, I had the same experience. I was like, "I couldn't code before." I wouldn't say I'm a good coder, but it gives you the ability to actually build something in a short period of time, which is pretty astounding.</p><p><strong>[00:24:14] Zvi: </strong>Yes, if I had my current level of coding ability with GPT-4, 5 or 10 years ago, at the time that I was trying to code things, I would have been a very, very good coder very quickly compared to everyone else who didn't have it.
I think this leapfrogs you pretty fast.</p><p><strong>[00:24:30] Dan: </strong>Yes, and even if it's not actually directly writing the code, it explains things to you if you're getting an error or something, which in a previous life could take hours for you to get unstuck.</p><p><strong>[00:24:39] Zvi: </strong>No, the biggest thing is just it will actually answer your questions for real. You can ask clarifying questions, and it will tell you and you can sort things out. Especially back then, when I was trying to do things that were very much data processing, building in algorithms that did things I wanted to do, not requiring anything that had happened in the last two years, essentially, it would have been very, very good at helping me through almost all of that. Except for the part where I was scraping websites, it would have been just amazing, and the scraping would have just been, "How has this thing changed since its data cutoff?" If it hasn't, we're in great shape. If it has, we'll see how well it reads HTML.</p><p><strong>[00:25:25] Dan: </strong>Where do you get the most mundane utility today from AI?</p><p><strong>[00:25:29] Zvi: </strong>I think coding is where I would get it if I was integrating over all of the potential things that someone like me might be doing, or something like that. In practice, where I get it is just asking questions when there's something I don't understand that doesn't involve a web search. If it's just about facts about the universe, understanding things, trying to draw parallels, just checking intuitions, getting explanations, really basic stuff. I think that's where I get the most utility out of the system right now. It's really, really good at that.</p><p>Now, Dall-E 3 represents a quantum leap in usefulness of image generation in a way that I don't think people have fully appreciated yet. We went from I can generate images, but I can't get the thing that I want.
I can get some vague evocation of the thing I want, to "No, I can get the thing I want," actually pretty close with amazing quality. That is a pretty big change. I am very much looking forward to the world in which I get good at that over time, and the world in which we get the version of Dall-E 3 that isn't safety-bound, so that I can do the things that it will refuse to do, like give me a picture of a public figure, for example.</p><p><strong>[00:26:58] Dan: </strong>Yes. You seem pretty optimistic about all the things that you consider outside the existential FOOM scenario, just mundane utility. A lot of people have the same worries that they had about Facebook or just the internet in general where they're worried about misinformation or bad actors using it in ways that harm politics or something. Am I right here that you're an optimist on most of those concerns for people?</p><p><strong>[00:27:23] Zvi: </strong>Yes. Tyler Cowen and I have a lot of disagreements about the long-term impact, but we agree on most of the short-term impacts. His analogy is the printing press. This is a lot like the printing press, in that it greatly enhances our ability to share and process information and to see change. There are going to be people who misuse the printing press. People also talk about, "Look, I typed murder is good actually into a word processor. Oh, no. I called on the phone, and I said a bad word, why isn't somebody doing something about this?" The answer is obviously that's stupid. We understand this. We can deal with it. Yes, it makes these things easier, but that's for humans to solve.</p><p>In the short term, this is in fact just another tool, and it's slightly less of just another tool in some senses, but it's mostly just another tool. I'm confident that we can sort this stuff out, and we will emerge stronger. I'm also very optimistic it can help actively with dealing with these problems. 
If you are getting a bunch of misinformation all around the web and people are telling you crazy stories and people are giving you crazy theories, you can type that stuff into an LLM and ask it whether or not this is made up, whether it's making any sense. That's pretty good.</p><p>A lot of times there's a lot of social reasons why you can't just ask other people, or you'll get bad answers if you do because you're asking them about the thing that people were fooled about. I'm pretty optimistic that this is going to be not only handleable, but in many ways an improvement.</p><p><strong>[00:28:57] Dan: </strong>Yes. Speaking of Tyler Cowen, so you've written before about how he and Marc Andreessen, probably two of the most-- I don't know what the right word is, but strongly intellectual people, really are not worried about AI existential risk to the point of sort of saying that, "We shouldn't even be thinking about it at all. We should just turn the dial up all the way to the right and just go maximum fast." You have this post called The Dial of Progress where you're arguing that they realize that the discourse can't deal with nuance. You have two options. You either say like, "We're going to go really hard on progress, or we're not going to go hard on progress at all."</p><p>You'd make the case for, "Maybe we should introduce a little bit of nuance and introduce some more dials." I have some speculation just personally about their public stance. I'd be curious to get your take on it. Tyler loves being Straussian or maybe if he doesn't love being it, he loves to try to look for the Straussian view in other people's statements. If you think about it, if there were zero, what you would call, serious thinkers going all in on AI acceleration, and it was just anon accounts on Twitter or something. 
When you have someone serious like Marc Andreessen and Tyler Cowen, who usually are pretty rigorous in their thinking, saying that we should just go all the way up, maybe the worry is the counterfactual in their mind where they don't exist.</p><p>In that counterfactual, every single smart person who people give credibility to says, "We need to regulate really quick." Then the equilibrium ends up being, "Oh, we just over-regulated it, and we're in a nuclear scenario where we're not realizing the full potential of the technology." By being the two prominent people out there who are saying, "Let's just turn the dial all the way up to the right," it causes people to then, like yourself, make responses to them that are really clear and thought out and introduce more nuance. Maybe they're not saying exactly what they think. I'm curious for your view on this.</p><p><strong>[00:30:51] Zvi: </strong>There are two very different cases here. For Tyler Cowen, I'm pretty confident in what he actually thinks based on interactions in person and in Zoom calls and emails. Also, him being way too smart to think that he is helping if he holds the other view. I think that you're being a little bit unfair in terms of him saying, "We should turn it all the way to the right and just accelerate maximally," I think he has more nuance than that. That he understands that from his perspective, he has to say, "Yay, progress." He has to say, "Yay, AI development," because he sees the alternative is turning out worse in practice.</p><p>He also is trying to push the critics to be better from his perspective, to answer various questions and to take into account various things that he thinks are important and so on. Yes, also, he's a troll, and he's a Straussian, and he actively likes making his readers angry, his words, from his conversation <strong>[unintelligible 00:31:52]</strong>, so I'm not putting words in his mouth because it's him. I think he enjoys it. I think he thinks it's good. He thinks it makes people better thinkers. 
He thinks people actually figure things out better when they get enraged by people saying things like this. It gives a map into the madness.</p><p>I just think that what's going on is also involving a failure to think well about what happens when we get to these future scenarios with very capable artificial intelligences. He just doesn't appreciate what is about to hit him. You see his thinking on near-term AI being often very, very good, being very, very grounded, very, very specific. Then he and many, many other economists, in particular, seem to have this thing where then they extrapolate to the future and they go into this reference class mode where they're like, "It's a technological advance that increases human capabilities, and it'll be like all the others and blah, blah, blah, and everything is just going to be normal."</p><p>They just don't engage with the actual arguments for why that's not the case in a real way; they dismiss them without really counterarguing with them other than reference class-style arguments in my mind. Marc Andreessen is a very different case, where I don't think I would call him a careful thinker, if I would call him a thinker at all. I think I would say he's someone who's willing to express very strong opinions, strongly held. He is definitely someone who jumps on bandwagons without entirely thinking them through, shall we say.</p><p>You can look at his embrace of crypto and Web3, both in terms of his investments and his talking his book to see that he will embrace theses that he has not actually very carefully analyzed. He's going to be very pro-progress, very pro-acceleration in general. Also, he talks his book a lot. He is first and foremost a venture capitalist and a businessman. Also, he's clearly wanting also to be a massive, massive troll. If you've seen his Twitter, he does that.</p><p><strong>[00:34:00] Dan: </strong>[laughs] Yes.</p><p><strong>[00:34:03] Zvi: </strong>Also, he has a message. 
He's hammering it through, and he just doesn't want to hear it when there are alternative messages. If you argue with him and point out where he's wrong, he will often block you. I'm fortunate that that hasn't happened to me, but I've tried to be very careful and nuanced. "Marc, you have my email. One of your partners tried to set us up before this whole thing became a thing. I'm happy to talk and not looking for your money." It should be friendly. Hey, who knows? I really wish we had a government and a culture that appreciated the thing that people like that are trying to say more generally about everything else.</p><p>I would, in fact, make the trade of we accelerate everything. We build our houses. We develop our medicines, we innovate education. We just make life super better for everyone. Also, we play fast and loose with artificial intelligence relative to what I like. I would happily take that trade. A bunch of accelerationists are like, "I'm going to take that trade," but you can't make that trade. I'm like, "I can't make that trade. It's not in my power. I'm sorry. It's too bad." I call them the unworried just to be very neutral, to not imply anything. They call themselves accelerationists usually, which is the word that if we made it up for them, it would seem like a really nasty thing to do to them, but they're owning it, so it's fine.</p><p><strong>[00:35:40] Dan: </strong>Yes. It's funny. The original term comes from Nick Land, and it's this dark philosophical idea. Just using it to say, "Let's do LLMs a little bit faster," has always been a little bit funny. [laughs]</p><p><strong>[00:35:54] Zvi: </strong>It goes way back before that. Accelerationism represents the idea originally, as I understand it, of we should make events progress faster even in directions we don't like because we know that the ultimate outcome is favorable to us. 
In particular, the old joke where there are two communists on the street, and they see a beggar and one of them moves to give them a slice of bread and the other backs his hand away and says, "Don't delay the revolution."</p><p><strong>[00:36:20] Dan: </strong>Peter Thiel had a quote a few years ago. I have opinions on it, but I want to get yours. He said that crypto is libertarian and AI is communist. The idea here is like, Oh, when we think about something like the CCP, they're going to get their hands on AI, and they're going to use it for facial recognition, and it's going to help the totalitarian state, and they're going to be able to monitor everything you do. Crypto is this big, libertarian force where we can take away the money from the government and the Fed can't control it anymore. It's going to give all the power to the individual. Seems like it's going in the opposite direction. I don't have too strong an opinion on crypto.</p><p><strong>[00:36:58] Zvi: </strong>The obvious question I would ask Peter Thiel if he said that in a private conversation, just to probe the intuition is, "Do you think we should avoid building artificial intelligence?"</p><p><strong>[00:37:11] Dan: </strong>Fair enough.</p><p><strong>[00:37:12] Zvi: </strong>Because you talk all day <strong>[inaudible 00:37:14]</strong> about how worries about AI are overblown and that we should not be worried about it. He is another unworried person who has many criticisms, I think many of which are deeply, deeply unfair about the people advocating to notice that we might all die, and offers less in the way of object-level explanations for why he thinks we won't. He clearly is not a worried person in this context. There is nothing that Peter Thiel hates more than the Chinese Communist Party. <strong>[unintelligible 00:37:46]</strong> communism. This is enemy number one. 
I don't think it's an unreasonable enemy number one to have.</p><p>If you think that AI is communist, you might not want us to be driving the development of artificial intelligence, sir. You might want to speak out about it a lot more than you are. You might want to be funding efforts to not have that happen. I don't see much in the way of that from him. I find it hard to believe he has internalized this thing the way that this is implying. How libertarian is crypto? I think it's a very good question. I think crypto has proven not to be what people thought or expected. It turns out that the useful cryptos are pretty much just becoming centralized under the executive control of governments and everything else and just another means of storing value.</p><p>The original promise of all this stuff has proven to be at least highly questionable. That's not what people want. It's become a tiny portion of the value. I do understand the ethos of the people building it is there. They built certain tools that can be used for those purposes, and there's something to it. For AI, certainly, there are people who claim AI is libertarian, and AI wants to be open source, and AI will set us all free. These people are going to get us killed if they enact their agenda. This would be very, very bad.</p><p><strong>[00:39:16] Dan: </strong>What's the rationale there?</p><p><strong>[00:39:18] Zvi: </strong>How long an answer do you want?</p><p><strong>[00:39:20] Dan: </strong>You tell me.</p><p><strong>[00:39:22] Zvi: </strong>The short version is because AI alignment is an unsolved problem that we do not know how to solve. Even if we did manage to align AI to the wishes of the person who possesses the AI, some people will then choose to align their AI to things we very much do not want. We will be unable to control the development of AI and AI capabilities, and we'd be unable to control competitive dynamics between these things. 
Offense is usually favored over defense in these situations.</p><p>The various dynamics of everybody having their own AI will force competitive pressures upon us that will cause us to hand more and more control over to AIs and have AIs increasing their capabilities and reward the AIs in proportion to how much their behaviors cause them to be deployed more in the future and have more access to more resources in the future. Then all of this leads rapidly to human extinction even in the best case scenarios I can think of.</p><p><strong>[00:40:18] Dan: </strong>I think a lot of folks are familiar just with the overall case for AI doom, why it's very scary to have an intelligence that's smarter than humans exist.</p><p><strong>[00:40:27] Zvi: </strong>Obviously the very basic structure is just we are about to build smarter things, better optimizers, more capable things, more efficient things, more competitive things than us. How do you think that's going to go? If you think that that is a safe thing to do, I do not know what drugs you are on, but I am deeply, deeply confused why you think that's safe. Ignore all of the technical arguments, ignore all the difficulties of alignment, ignore all of it. That is not a safe thing to do.</p><p>How many books and movies and thought experiments and intuitions do you need before you realize that is not a safe thing to do? Arguing that there's less than a 1% chance of humans stopping being in control of the future and stopping being the dominant force on the planet, you're just not thinking clearly at all, I'm sorry. This just doesn't make any sense, and everybody who tries to do a bunch of math and run a bunch of complicated arguments is just missing the forest for the trees.</p><p><strong>[00:41:29] Dan: </strong>I'm very on board with that stance. I think what I wanted you to drill into is, what do you consider the difference between everyone having their own AI versus there being some centralized AI? 
For example, if I access ChatGPT in my web browser, is that me having an AI or do I need to go access Llama and download it on my computer to have an AI? What is the difference there between a company being in control of it and individual humans having their own?</p><p><strong>[00:41:59] Zvi: </strong>The difference comes from, am I able to restrict how you can modify, how you can instruct, what you can do with your AI, and how you can utilize or expand its capabilities, and what instructions and methods you can use and how you can deploy it in a meaningful way or not? Do I have any control over your decisions with this AI or is this AI fully under your control? Then to what degree are we willing to use that? If everybody has access to their own instantiation of a fine-tuned GPTN, but that comes with no access to the weights, no right to then just arbitrarily tell it what it can and can't do and reasonable effective alignment controls on that system such that if you ask it to murder a bunch of kids, it'll be like, "No."</p><p>If you ask it, "How do I build a fusion bomb in my backyard?" It'll be like, "No," and so on, or how do I build a smarter AI than this? Like, "No," et cetera, et cetera. Then it's plausible that because we have a central point of defense where only a handful of actors or even one actor are in control of which queries get passed onto this thing, that we can contain the competitive dynamics, we can contain the destructive aims, we can handle the situation, broadly construed, and I'm simplifying this a lot obviously.</p><p>Whereas if everybody has root access, the ability to do their own training on the thing with no checks involved, then you can't put any controls on what people do with it whatsoever. 
One of the things I repeat over and over again is if you were to release an open source model, you are releasing the completely unaligned, except to the user, version of that model within two days, period.</p><p>Because there are very, very easy ways to fine-tune that model such that you remove all of the alignment. If you take Llama 2 in its base case, it'll refuse to speak Arabic because it's worried that it's associated with violence, which is itself kind of racist. In the name of trying to be harmless, it's actually deeply, deeply racist, which is funny. If you spend two days, and I think the record is a few hours at this point, fine-tuning it, you can get it to answer actually anything that you want, and it's no different from Mistral, the system that's designed to just do literally anything that you ask for. If that knowledge is there, if that skill is there, if that ability is there, it's yours.</p><p><strong>[00:44:50] Dan: </strong>Yes, but you also are an optimist just generally about this mundane utility. Are you fine with these models existing today, or what is the GPT-number equivalent where you estimate we need to stop open-sourcing?</p><p><strong>[00:45:06] Zvi: </strong>I think this is one of those cases where-- one of the things that Connor Leahy likes to talk about, and it's very true: there are only two ways to respond to an exponential, too early or too late. Responding to the exponential exactly on time, saying we put the COVID restrictions in on exactly the right day or exactly the right hour, is impossible. That's wrong, that's too late. You need to respond too early, and also, there's a pattern here. If we get into the habit of doing this thing, it can be very, very hard to put the brakes on it.</p><p>If I could set an open-source hard limit of capabilities permanently, where do I think the ideal number would be? 3 1/2 is probably fine. Probably fine enough that I'm willing to say sure. 
That is, if I have full control over stopping afterwards. Four, I'm nervous about what you might be able to do with it. Again, existentially we're talking 99 point something, probably three nines. It's two to three nines safe at this point to open-source GPT-3.5, but not 4, even now, because you don't know what people can do with it once you release it, because the problem is you're releasing GPT-4 but you're also releasing everything they can do by iterating on GPT-4 to improve it.</p><p>You don't really know how many dumb things OpenAI did from the perspective of 10 years from now, that we can then 10 years from now do a lot better. You can't un-open-source GPT-4 once you've done it. Who's to say what we could iterate? The problem is you don't get to stop progress where you put the halt number on. That's why open-sourcing is so dangerous. They can keep going off of what you've already released. They've done the expensive hard part of the work in some important sense already and you don't know where that stops.</p><p>I would say I'd rather stop around 3 1/2. I would be only somewhat nervous, is my guess, if we stopped at 4. If we open-source GPT-4.5, I think we have at most 1.9 nines of we don't all die. We certainly don't have two nines, so we should stop pretty soon. I don't want to play with fire on that level.</p><p><strong>[00:47:19] Dan: </strong>Makes sense. Related question, if we just paused development completely today on both closed-source and open-source, do you think that the next 10 years would see significantly higher GDP growth specifically due to AI than otherwise? Basically what I'm wondering here is how much mundane utility basically is still hidden that we just haven't actually tapped into yet? Because these things just take a long time for businesses to get ahold of and people to figure out how to use them, and there's all sorts of applications you need to build around it. 
It's not just a thing that comes out and you have the API and then boom, progress.</p><p><strong>[00:48:02] Zvi: </strong>The answer is, oh, a lot. Not hard take-off triple digit GDP growth or anything, and probably not double digit GDP growth worldwide or in the US just from this current set of things. If you tell me, okay, GPT-4, as it's currently constituted, is going to be the most capable foundation model for the next 10 years because some genie is going to stop every attempt at a superior training run and it mysteriously just won't work. What happens to GDP? Yes, I expect substantial boosts in GDP growth from the adoption of this technology. We can argue over whether that's on the order of 0.1%, 1% or 10%.</p><p>My guess is it's north of 1% and south of 10%, but it's really hard to predict going forward because you don't know the pace of adoption, you don't know the counterfactual, you don't know what regulations will come down on these things, you don't know how a lot of players will react to them, and you also don't know to what extent the productivity will show up in the GDP statistics, which is always a question. Is the programmer efficiency jumping through the roof showing up in GDP statistics? Not entirely. I think that's a very strange question to answer properly.</p><p>I do think this is already a substantial boost in terms of if you were planning your tax rates and your strategies for how to respond to the economic conditions. You should be much more optimistic than you would be if these things weren't happening.</p><p><strong>[00:49:41] Dan: </strong>Yes, there's a post that is just like catnip to me, that I find really good, that you commented on, which is basically the claim that AGI timelines would cause real interest rates to be high. The reason here is interest rates go higher or the real interest rate goes higher when either A, the time discount is high, or B, future growth is expected to be high. 
What's interesting about this case is it doesn't really matter if the AGI is aligned or not, because if everyone thinks the world is ending next year, you don't care about saving your money, and then the interest rate goes up. The idea is both scenarios should increase real interest rates. You introduced a lot of nuance, and this is where I definitely agree with you, which is that the efficient market hypothesis is maybe not the best model to try and inject this idea into. There are several other points about why this didn't make great sense, but my question for you is, do you think that there's some point where this model actually is right, where GPT-5 or 6 comes out, and you say, "Okay, the real interest rate has not moved up. Now I'm willing to make this trade," or do the details here in your post where you list a number of reasons why this isn't a good trade always hold true?</p><p><strong>[00:50:57] Zvi: </strong>I think if that happened, my response would be, there's a better trade. The trade is, borrow all the money you can borrow at currently considered reasonable interest rates, and invest it in AI. Again, ignore existential risk and the implications that you might increase existential risk, you're just asking for the efficient market question. Clearly, if GPT-5 and 6 come out, and GDP growth accelerates, and interest rates aren't going higher, that means that we're not investing enough in taking advantage of these technologies. There isn't enough money flowing in. There's not enough competition for that. This money will earn greatly oversized profits.</p><p>I'm just thinking economically only here again, just for emphasis. Of course, interest rates will still be going up in the future as things continue to accelerate, and exponentials happen a bit fast. You could easily have a situation in which the impact of AI in this sense is doubling every-- starts at a year or two, and then the doublings accelerate. 
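The mechanism Dan describes (real rates rise with the time discount or with expected growth) is the standard Ramsey/Euler relation from growth economics. A minimal illustration, where the functional form is the textbook one but every parameter value below is a made-up example, not a number from the conversation:

```python
# Ramsey-rule approximation of the real interest rate:
#   r ≈ rho + sigma * g
# rho   = pure time-discount rate (impatience, or perceived doom risk)
# sigma = inverse elasticity of intertemporal substitution
# g     = expected per-capita consumption growth
# All parameter values below are illustrative assumptions.

def real_rate(rho: float, sigma: float, g: float) -> float:
    """Real interest rate implied by the Ramsey rule."""
    return rho + sigma * g

# Normal times: mild impatience, 2% expected growth.
baseline = real_rate(rho=0.01, sigma=1.0, g=0.02)   # -> 0.03

# Priced-in transformative AI: same preferences, 10% expected growth.
ai_boom = real_rate(rho=0.01, sigma=1.0, g=0.10)    # -> 0.11

# Priced-in doom: a 10%/yr chance the world ends acts like extra discounting.
doom = real_rate(rho=0.01 + 0.10, sigma=1.0, g=0.02)  # -> 0.13

print(f"baseline {baseline:.0%}, AI boom {ai_boom:.0%}, doom {doom:.0%}")
```

Either branch pushes the rate up relative to baseline, which is why the argument says rates should rise whether or not the AGI is aligned.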
The interest rate market is so big that it takes a while to overwhelm it, but then this happens very fast. Once people start to see it happening, they start pricing in that it will happen again in the future for real. Then interest rates go completely nuts.</p><p>If you have managed to borrow money, it might not be that different from just keeping the money you borrowed in some important sense, or a large portion of it. That already happened in the last few years. I have a mortgage on my apartment where I pay 2 1/2% fixed rate for 30 years. I could, in an efficient market, sell that back to the bank for $0.60 on the dollar. Even though I am definitely paying my mortgage, just the fact that I can invest in treasuries that earn 5 1/2% means they are taking a huge bath on this. They should very much be willing to buy their way out of it. Of course, for various reasons, including taxes, I have no interest in doing that. Also it's impossible logistically.</p><p>The point being, you could, in fact, argue that interest rates also have gone up. I haven't actually said this out loud, but perhaps this solves the economic mystery of 2023. People are just wrong about this not happening, right? What is happening in 2023? Every economist thought a soft landing was going to be extremely difficult. Every economist thought that you would cause a recession if you pushed Fed rates into the 5s this rapidly, like this. What did we get? We got a strong job market. We got strong demand. We got declining inflation, right?</p><p>Money is still worth something because there's more goods for that number of dollars to chase. Inflation went down, in some important sense, but we saw the natural rate of interest, in some sense, go up. We've seen interest rates rise without damaging the economy. 
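The mortgage arithmetic above can be sanity-checked with a small present-value sketch. The 2 1/2% and 5 1/2% rates come from the conversation; the $100,000 principal and the assumption of a full 30 years remaining are simplifying inventions for the example:

```python
# Rough check of the mortgage example: what is a 2.5% fixed 30-year
# payment stream worth when the market discounts at 5.5%?
# Principal and remaining term are illustrative assumptions.

def monthly_payment(principal: float, annual_rate: float, years: int) -> float:
    """Standard amortizing-loan payment formula."""
    r = annual_rate / 12
    n = years * 12
    return principal * r * (1 + r) ** n / ((1 + r) ** n - 1)

def present_value(payment: float, annual_rate: float, years: int) -> float:
    """Present value of a fixed monthly payment stream at a discount rate."""
    r = annual_rate / 12
    n = years * 12
    return payment * (1 - (1 + r) ** -n) / r

pmt = monthly_payment(100_000, 0.025, 30)   # payment on a $100k loan at 2.5%
value = present_value(pmt, 0.055, 30)        # worth of those payments at 5.5%
print(f"market value per dollar of face: {value / 100_000:.2f}")  # ~0.70
```

This simplified version lands around 70 cents on the dollar, the same ballpark as the "$0.60" in the conversation; the exact figure would depend on remaining term, prepayment risk, and credit spreads, which this sketch ignores.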
One could say what's actually happening is artificial intelligence is creating a lot of great investment opportunities and ways to get higher returns on your money and various reasons to prefer money now to money in the future, thus raising interest rates a significant amount. This is happening in the background, but the Fed is also raising their baseline rate of interest.</p><p>We didn't notice that this is why interest rates are going up, in some sense, because the Fed does dictate the actual interest rates in some important sense by moving last. They see the AI impact and then adjust their base rates. This is the reason we're surviving it without a recession. This is the reason why Biden might get reelected in '24, despite what the Fed is doing, which would otherwise be a huge problem for him, is because AI. The answer to "why isn't this happening," and I probably need to write this up now that I'm saying it out loud, is perhaps that the reason AI isn't raising interest rates is that it's raising interest rates.</p><p><strong>[00:55:12] Dan: </strong>It's that it's actually doing it.</p><p><strong>[00:55:14] Zvi: </strong>It already happened.</p><p><strong>[00:55:15] Dan: </strong>Yes, actually, it did. These things are just so complex, though, right, because my counter to it, and I was actually-- I once read a post several years ago that I thought was really interesting, which is, why are interest rates so historically low? This is obviously not relevant anymore, but it actually could apply to AGI, the conclusion that this guy drew. He said, because maybe the market is pricing in that we cure aging, so now the time preference is completely flipped, people don't care about having their money now, they're going to live forever. 
You've got to also make the case that AGI, maybe that's not the specific thing that it does, but it would probably do some pretty weird things, so to just take it--</p><p><strong>[00:55:56] Zvi: </strong>I would just cut us off and say that's not how traders think, not enough of them. There's no freaking way.</p><p><strong>[00:56:04] Dan: </strong>Well, that's the more macro point that you make, which is, yes, even that paper saying AGI should increase interest rates, even that is a stretch relative to what people who are trading for a living are actually doing on a day-to-day basis.</p><p><strong>[00:56:19] Zvi: </strong>It's entirely possible that AI is having the impact now because what's happened now has, in fact, awoken them to enough mundane utility-style advancements and accelerations that it's affecting interest rates now. I think that's pretty plausible now that I say it out loud. I don't think that they're pricing in the long-term stuff very well, and they're certainly not pricing in stuff like curing aging. Nobody thinks that way. Nobody has ever thought that way. Markets are much more myopic than that, always have been.</p><p>One of my favorite anecdotal data points is you look at what happened to the stock market and other financial indicators during the Cuban Missile Crisis, when the future got very, very different, and things basically didn't move very much. Everyone just sort of acted like, "Oh, I'm sure it'll work itself out somehow. If we all go together then we go," or something, so don't worry about it and just trade as if the missiles aren't going to fly almost entirely. Things moved by a few percent at most, even though the president's going around saying the chances of nuclear war are between 1 in 3 and 1 in 2 in the next two weeks. Everybody was paying attention to that full stop, so why should we expect people to respond to AI in these huge ways? It doesn't make any sense.</p><p><strong>[00:57:31] Dan: </strong>Yes, that is true. 
If you're not around here to see tomorrow, why care about selling now? You might as well just take the optimistic bet.</p><p><strong>[00:57:40] Zvi: </strong>Yes, and just generally, people very much have this normalcy bias. Even if you look at media, when people depict how people would react to these things-- the world is going to end in a week-- most of the people just go about their day. There's not much to be done. You don't get much benefit from uprooting everything. Life is what life is, and you can do things on the margin, but mostly you're better off pretending the world isn't going to end until it suddenly ends, from just an experiential perspective, than suddenly blowing it all on hookers and beer, right? It's just not a very good way to be happy or produce anything useful.</p><p><strong>[00:58:23] Dan: </strong>Yes. I saw a tweet a day or two ago that basically said the GPT-3.5 Turbo Instruct model can play chess at a level of about 1,800, and it's been previously reported that GPT actually cannot play chess, but it looks like it was just the reinforcement learning models that are bad at it. The implication here would be basically that reinforcement learning actually hurts reasoning abilities. What's going on here? Do you think this is just a specific case of reinforcement learning happening to hurt chess reasoning capabilities, or do you think there's a broader lobotomizing that's going on?</p><p><strong>[00:59:00] Zvi: </strong>There are other considerations, one of which is that someone at some point checked an evaluation of ChatGPT's Instruct model into their GitHub that checks chess. We should not put it past them to have somehow fine-tuned its ability to play chess so that it suddenly can play chess much better. We can't rule that out. We don't know. 
It's possible that this is a much more straightforward case, but we definitely see a degradation of reasoning abilities based on RLHF, and I write about this somewhat in the next column already.</p><p>In general, whenever you train a model to do something despite it not making any logical sense, the model is going to learn not to make as much logical sense. Learning not to value logical sense as much is going to impair its ability to think. I would point out here that this is also true in humans. To jump to the thing that I think people really should notice: if you tell me I have to go around the world pretending things are true that aren't true, pretending not to notice correlations that are in fact accurate correlations, and living in a world in which logic just breaks down in weird ways constantly, I am not going to be able to turn that logical impairment entirely off when I move to other realms where you think I should be fine. We should beware, when asking for this kind of special pleading on various issues, of the damage that we are doing.</p><p><strong>[01:00:42] Dan: </strong>Got it. You've written before about the difference between hardcore rationalist updating, like take your priors and update based on new information, versus what I think the commenter that you were referencing called the Ilya or Steve Jobs mentality of relentless optimism. Rationalists believe that you should consciously update your beliefs in the face of new information. Just quit and admit when you're wrong. Ilya or Steve Jobs, though, are way more focused on solving the problem at all costs: we're just going to figure it out until it's right. I think your commentary was, maybe both of these people are actually being rational, and it's more a matter of vocabulary in how they express the way they attempt to solve problems. Clearly, it seems to me that people tend to have one mentality or the other.
My question to you is, which is better for getting things done?</p><p><strong>[01:01:36] Zvi: </strong>I was a startup founder and a hardcore rationalist at the same time, where I was simultaneously holding both beliefs in my head. I held the belief of, obviously we'll probably fail utterly. This is not going so great. In fact, it was not for the most part going so great. It did not end up going so great when all things were said and done. Simultaneously, I held the relentless optimism of, but we should act as if it's going to work so that we make good decisions that cause it to work in some sense.</p><p>What's going on with Ilya and Steve Jobs could be thought of as this brain hack of, I know, for the purposes where I need to know it, that this is not guaranteed to work, but I know that acting the way I would if I assumed it was going to work will cause me to make decisions that make it more likely to work and more likely to have better results. I will do that while having this other process in my head that keeps an eye on when it's going to do something that's actually crazy because the assumption isn't true, and stops it from happening.</p><p>A person like Steve Jobs, you don't actually see them doing things like borrowing from mobsters who will shoot them if the prototype doesn't work, because we know it's going to work, right? No, Steve Jobs realizes that's dumb. Ilya realizes that's dumb. The same goes, metaphorically, for things less blatant than that: they don't set themselves up such that if the thing doesn't work, disasters happen. They just say, "Oh, this is going to work. Let's do the thing that will cause it to work," and use this to work their long days and drive everybody and keep everybody's morale up and figure out what the right ideas are and so on.</p><p>When exhibited that way, I would say they've found a way to make the hybrid work for them in that sense. I use it too in my own way.
I emphasize it differently: when you talk to me, you'll get the rationalist words coming out of my mouth more often when I'm doing the hybrid than you will from them. In fact, I think they believe more in the act-as-if-it's-just-going-to-work thing. I don't see them actually acting, for real, as if their probability is 99%.</p><p>If you thought your probability of succeeding was 9%, you wouldn't get out of bed in the morning, even if the odds say it's definitely worth it. Let's do this. If you have to say to yourself it's 90, then sure. Don't say it's 99.9, not where it counts.</p><p><strong>[01:04:17] Dan: </strong>As a startup founder and someone who writes like crazy, prolifically, what is the Zvi Mowshowitz production function? How do you get so much done?</p><p><strong>[01:04:26] Zvi: </strong>Part of it is just practice, practice, practice, certainly. I would say I am constantly working in the sense that, like every other creative, I'm constantly looking for inputs. I'm constantly thinking about what something implies, what I have to say about it. The basic workflow is I have three giant monitors, because I found that a few very, very large monitors are better than a lot of small monitors. I have a lot of open tabs and a lot of open tab groups. I'm continuously scanning various RSS feeds, Twitter, my inbox, and other sources for new information and news sources.</p><p>When I find a news source, I put it in the appropriate group, or if it's like, "I know exactly what to do with it," and I have the time right now, I'll just put it in immediately. I then organize these things logically as I go. Then sometimes I'll go over them and I'll write more or I'll edit or whatever. Then periodically I'll say here's a compilation of the things on this topic or these related topics that I've had. Then I'll edit it and organize it into a unified whole and I'll put it out there. Then every now and then, I wish I did this more, but it's hard to do.
You'll find a concrete isolated thing and you'll write that up in more detail and you'll push that through, and it'll have more coherence and hopefully stand the test of time more.</p><p>Mostly it's just relentlessly being able to break up-- Part of it is just being able to hold this stuff in your head enough that you can reference it and then break it up into chunks so that the moment I-- I have just a list of it <strong>[unintelligible 01:05:58]</strong> I'm like, "Oh, I know where that goes. I know what this relates to this week. I know how to add this, integrate this in, and how this impacts the other things I was saying and how I need to move other stuff around based on that." You just slowly build up a superstructure that way. A lot of it is just you iterate. I had years of doing it for COVID. That transferred mostly pretty cleanly.</p><p><strong>[01:06:19] Dan: </strong>At what point do you think LLMs will be a significant input to that?</p><p><strong>[01:06:24] Zvi: </strong>They're a significant input already in the sense that when I want to answer certain types of questions or learn certain types of things, I will use LLMs rather than use Google or asking a person because it's faster and more efficient. Often I will check intuitions using LLMs. I should be using LLMs more for things like grammar and other things like that, but I haven't figured out how the workflow is worthwhile there just given how I happen to work. In particular, I use LLMs often for asking questions about papers that are clearly not worth actually reading, because you can't stop and read every 40-page paper that comes across your desk. You just don't have the time.</p><p>You can do things like control F for the words extinction and existential, but that's really not a good check of whether they're saying things about that. 
You can ask Claude, and Claude will be pretty good at identifying whether or not they <strong>[unintelligible 01:07:21]</strong> If you ask it, does this paper address this topic at all? That's the kind of question it's very, very good at answering, and it can point you vaguely to where the paper does that. Then if it does, you read the section. If it doesn't, you say, "Okay, it doesn't," or you ask, how does it do this particular technique? It's a much better search function than control F, as long as you then read the text afterwards. Again, it's definitely a boost. If you're asking when the LLM can write the damn thing, we're not close. We're nowhere close. It's not obvious we get there before the end.</p><p><strong>[01:08:01] Dan: </strong>What about a version of GPT, along with fine-tuning, where you say, here's every news source that I am likely to look at. Here are the last 200 AI newsletters that I've read. I want you to search all these news sources, aggregate them into similar categories, and write the newsletter. You think that's actually not possible till AGI, or do you think [crosstalk]</p><p><strong>[01:08:19] Zvi: </strong>Oh, I was talking about actually writing the words, actually doing the analysis, creating the outputs. I'm not sure that's not AI-complete.</p><p><strong>[01:08:28] Dan: </strong>Got it. Yes.</p><p><strong>[01:08:30] Zvi: </strong>The thing you described is probably doable now in a useful way. The problem is that it takes a very high quality threshold before it is better than nothing. You have this problem that it's very hard to hill climb on it. If I had a version that was useful, but could be more useful if it was better, worth using now, or at least not too bad to use now, maybe not as efficient but definitely serving a purpose, then I could hill climb on that and iterate and get to where I want, as opposed to, no, I have to put in a bunch of programming work while it's terrible.
I still have to check all my usual sources anyway before it gets to that point.</p><p>I always use chronological feeds on all of my social media, RSS, and so on. I don't use any kind of AI-style filters, except very, very crude stop-spamming-me-style filters at most, precisely because a lousy filter just means you need to check the damn unfiltered version anyway. It accomplishes nothing. We're not there yet. I could probably be well-served by having an LLM scour the rest of the internet for highly plausibly contextually important things and having it present them to me, especially if it also checks for redundancy with my current feeds. Again, that requires a bunch of coding work, a bunch of iteration, and a bunch of upfront time investment. I at least haven't chosen to do that myself. I'm accepting grants if somebody really wants to supercharge things and wants to bump me up to the point where I can just hire engineers, but that's definitely not in my price range right now.</p><p><strong>[01:10:16] Dan: </strong>I've had Robin Hanson on this show before, and his views come off as quite strange to someone who's not initiated into his style of thinking. He's very okay with the idea that humans will merge with AI and that our descendants will be totally unrecognizable compared to us today. He also thinks that's totally fine. He doesn't have any moral qualms about it whatsoever. It seems to me like that's an extreme case, but it feels like at some point all alignment questions end up boiling down to political or moral arguments about what a good future actually looks like and these really deep questions about what it means for humans to live a meaningful life. Do you agree that that's true? Is there a way we can live in a more pluralistic world than we do today?</p><p><strong>[01:11:04] Zvi: </strong>Several things to address there.
The first thing is, I think he would agree with me strongly that when we say merge with AI, that's almost always people talking nonsense. It's like the <em>Mass Effect 3</em> ending where you merge. You're saying words, but it doesn't really mean anything. You have no idea what you're saying or how that translates or what that would operationalize as. Mostly you're just imagining something that, narratively, you vibe-wise want to happen, but that isn't a thing. I still think it's the correct ending to choose because the other endings definitely kill you. Regardless of the extended version where they don't kill you, or at least don't kill you immediately, it's just not actually accurate to think that way. It's like, okay, this was meant to be something smarter than it technically is. Maybe it could be something, but the others are <strong>[unintelligible 01:11:54]</strong> Anyway, the way I would describe Robin Hanson's position is, yes, there are not going to be any humans. There's not going to be anything that resembles a human all that much, and that's okay. I'm highly in favor of moving towards this goal. If you believe that is what's going to happen by default, then yes, a huge portion of what you decide comes down to the question of, but is that good, actually? Do we agree with this?</p><p>He thinks we'll be able to leave legacies through these artificial intelligences, meaning they'll reflect things that we are or that we cared about. I don't think that's true in the Hansonian-style scenarios. Take the scenario Hanson's envisioning, which I think is plausible. I think it's very plausible that we'll end up in a Hansonian future if we don't do something about it, and I think that's bad. The legacies that we're talking about are things like, oh, we kept a lot of aspects of the Linux kernel because it was just easier to build off of what we had rather than starting from scratch.</p><p>That does not have any value to me.
I don't feel like our legacy has been preserved because we kept some of the Linux kernel, or anything of that nature, or anything else I can think of that would survive the evolutionary competitive pressures that Hanson is imagining would happen to the AIs, which would cause them to diverge so much from us. To your central question, does it boil down to value questions? I think the answer is not entirely, but they're important. We have these strong disagreements about which actions, including the default case, lead with what probabilities to what types of outcomes.</p><p>What futures are we headed towards? What are the possibilities open to us? How do we steer between those possibilities? These are all questions that are-- well, hell if we know, right? To a large extent, we're trying to figure it out. We have our best guesses. We disagree about that. A lot of the disagreements are very, very intelligent and genuine, with good points on both sides. Simultaneously, the vast majority of people hold views where they have not actually thought these things through, and those views are very ill-considered or completely disingenuous.</p><p>Among people who have considered it, the actual ways to steer this towards different things, those are tricky. Knowing what you can do to steer towards various outcomes is tricky. In fact, we have the question of which intermediate steps lead to what types of lived experiences and outcomes, and what things would be present in that future universe over what timeframe. Very good questions. Now, if you could somehow agree on all those questions, you would then have this boiled down to a moral issue. You would then be able to say, okay, Zvi and Robin agree on what would happen given what decisions by humanity.
Humanity is here to make a decision based on that.</p><p>We have to decide if we should go with Zvi's world, which has in it much less optimized intelligences and has the following potential vulnerabilities and long-term disequilibria and other things, but he thinks it provides a lot more value, or Robin's world, which has these other things that Robin thinks provide a lot more value, which world should we choose? We could have a debate and make a decision, for some value of we. We're not there yet. We're nowhere near the galaxy. We're still very much in the we don't know what's going to happen. I think that it is an important question to sort out.</p><p>Do you think that AI is getting control of the future and there no longer being humans, us not having great grandchildren is good, actually, compared to the alternative? There are people for several different reasons who say yes. There are the antinatalists, negative utilitarians who think that humanity just suffers. The ultimate Buddhists in some broad metaphorical sense. You have the people who think that the AI has more value than we do because it's more intelligent. Then you have the people who think that evolution is the thing that matters or is valuable, and who are we to interfere with it or something like that.</p><p>You have the people who think, well, I care about myself and my own short-term future. I don't care what happens in the long run. If this allows me to enjoy the next 10 years of cool AIs in it, then I don't care. You have the people who think that right now humans have value, but the humanity that holds back singularity would be so crippled and morally broken and experientially broken and disheartened. Some combination of these things, that it wouldn't have value or have a negative value, and therefore we're better off just letting it take its course in some sense.</p><p>You have the people who value nature and think that humans are bad for nature and that AI would be good for nature. 
They are wrong, both because they're wrong about what's valuable and because they're wrong that AI would preserve nature. It won't. It will wipe it out so much more efficiently than humans ever could. There are several other positions of that ilk. Then within the people who want to preserve humanity, there are those who want to preserve humanity in order to do certain specific things or for certain ways of life or whatever, and people who want it to be so that people can do whatever they want in some important sense.</p><p>There are people who think, "Oh, we want humans to flourish. We want humans to enjoy vibrant, interesting, complex lives," but who don't know. We don't know exactly how to do that. We're punting that until we do, because that's a really, really hard problem. We've been having this conversation since long before. I'm largely in that camp. When I ask myself what I value, I have a lot of meaningful intuition pumps and things to say about that. I don't pretend to have all the answers.</p><p>I do think it's much easier to tell which things you don't value and don't want than to know exactly what it is you do want. We'd be in much better shape if we could specify what human values were and what we actually cared about in a way that could be interpreted by an AI or by another human. I don't think we're there yet at all. I think that'd be my response.</p><p><strong>[01:18:28] Dan: </strong>Zvi, this has been an awesome conversation. Thank you so much for coming on the show. Really enjoyed it.</p><p><strong>[01:18:33] Zvi: </strong>Thank you. I had a good time. You ask different questions.
I always want people who ask different questions.</p>]]></content:encoded></item><item><title><![CDATA[Rohit Krishnan]]></title><description><![CDATA[Listen now (58 mins) | Efficient organizations, influencing curiosity, and the future of VC]]></description><link>https://www.danschulz.co/p/4-rohit-krishnan</link><guid isPermaLink="false">https://www.danschulz.co/p/4-rohit-krishnan</guid><dc:creator><![CDATA[Dan]]></dc:creator><pubDate>Tue, 22 Aug 2023 02:30:42 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/136290606/98cde09d8cc64adf4e54448f168dc2f4.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<div id="youtube2-gAhHGX0vvns" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;gAhHGX0vvns&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/gAhHGX0vvns?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><div class="apple-podcast-container" data-component-name="ApplePodcastToDom"><iframe class="apple-podcast " data-attrs="{&quot;url&quot;:&quot;https://embed.podcasts.apple.com/us/podcast/the-world-of-yesterday/id1693303954?i=1000625206118&quot;,&quot;isEpisode&quot;:true,&quot;imageUrl&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/podcast-episode_1000625206118.jpg&quot;,&quot;title&quot;:&quot;4 - Rohit Krishnan&quot;,&quot;podcastTitle&quot;:&quot;The World of Yesterday&quot;,&quot;podcastByline&quot;:&quot;&quot;,&quot;duration&quot;:3491000,&quot;numEpisodes&quot;:&quot;&quot;,&quot;targetUrl&quot;:&quot;https://podcasts.apple.com/us/podcast/4-rohit-krishnan/id1693303954?i=1000625206118&amp;uo=4&quot;,&quot;releaseDate&quot;:&quot;2023-08-22T02:30:42Z&quot;}" 
src="https://embed.podcasts.apple.com/us/podcast/the-world-of-yesterday/id1693303954?i=1000625206118" frameborder="0" allow="autoplay *; encrypted-media *;" allowfullscreen="true"></iframe></div><iframe class="spotify-wrap podcast" data-attrs="{&quot;image&quot;:&quot;https://i.scdn.co/image/ab6765630000ba8ad80701bf9ce379526bcd1566&quot;,&quot;title&quot;:&quot;4 - Rohit Krishnan&quot;,&quot;subtitle&quot;:&quot;Dan Schulz&quot;,&quot;description&quot;:&quot;Episode&quot;,&quot;url&quot;:&quot;https://open.spotify.com/episode/5ZirswhD1D2ZC8jaqWmG7b&quot;,&quot;belowTheFold&quot;:false,&quot;noScroll&quot;:false}" src="https://open.spotify.com/embed/episode/5ZirswhD1D2ZC8jaqWmG7b" frameborder="0" gesture="media" allowfullscreen="true" allow="encrypted-media" data-component-name="Spotify2ToDOM"></iframe><h2>Timestamps</h2><p>(00:00) Intro</p><p>(00:17) Bullshit jobs</p><p>(04:36) SBF on employee efficiency</p><p>(11:02) AI on employee efficiency</p><p>(14:56) Efficient organizations as a thesis</p><p>(19:53) Are people more or less curious today?</p><p>(25:10) Firms influencing curiosity</p><p>(34:06) Return to office</p><p>(39:00) Future of VC</p><p>(47:23) Trading on margin</p><p>(55:13) Role of VC in building companies</p><h2>Links</h2><ul><li><p><a href="https://www.strangeloopcanon.com/">Rohit&#8217;s blog - Strange Loop Canon</a></p></li><li><p><a href="https://twitter.com/krishnanrohit">Rohit&#8217;s Twitter</a></p></li><li><p><a href="https://www.amazon.com/Bullshit-Jobs-Theory-David-Graeber/dp/150114331X">David Graeber, Bullshit Jobs</a></p></li><li><p><a href="https://www.amazon.com/Debt-First-5-000-Years/dp/1612191290">David Graeber, Debt</a></p></li></ul>]]></content:encoded></item><item><title><![CDATA[Steve Hsu]]></title><description><![CDATA[Listen now (108 mins) | Polygenic scores, gene editing, and human flourishing]]></description><link>https://www.danschulz.co/p/3-steve-hsu</link><guid
isPermaLink="false">https://www.danschulz.co/p/3-steve-hsu</guid><dc:creator><![CDATA[Dan]]></dc:creator><pubDate>Thu, 17 Aug 2023 22:31:07 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/136175555/77402cf66eb08d96c32494549eff1d6b.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<div id="youtube2-J2FtPvyOlkI" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;J2FtPvyOlkI&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/J2FtPvyOlkI?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><div class="apple-podcast-container" data-component-name="ApplePodcastToDom"><iframe class="apple-podcast " data-attrs="{&quot;url&quot;:&quot;https://embed.podcasts.apple.com/us/podcast/the-world-of-yesterday/id1693303954?i=1000624814274&quot;,&quot;isEpisode&quot;:true,&quot;imageUrl&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/podcast-episode_1000624814274.jpg&quot;,&quot;title&quot;:&quot;3 - Steve Hsu&quot;,&quot;podcastTitle&quot;:&quot;The World of Yesterday&quot;,&quot;podcastByline&quot;:&quot;&quot;,&quot;duration&quot;:6493000,&quot;numEpisodes&quot;:&quot;&quot;,&quot;targetUrl&quot;:&quot;https://podcasts.apple.com/us/podcast/3-steve-hsu/id1693303954?i=1000624814274&amp;uo=4&quot;,&quot;releaseDate&quot;:&quot;2023-08-17T22:31:07Z&quot;}" src="https://embed.podcasts.apple.com/us/podcast/the-world-of-yesterday/id1693303954?i=1000624814274" frameborder="0" allow="autoplay *; encrypted-media *;" allowfullscreen="true"></iframe></div><iframe class="spotify-wrap podcast" data-attrs="{&quot;image&quot;:&quot;https://i.scdn.co/image/ab6765630000ba8ac2e8b2620bdcdd4b4ba9213c&quot;,&quot;title&quot;:&quot;3 - Steve 
Hsu&quot;,&quot;subtitle&quot;:&quot;Dan Schulz&quot;,&quot;description&quot;:&quot;Episode&quot;,&quot;url&quot;:&quot;https://open.spotify.com/episode/5wsk9tdR82YbO4hZWAaQTV&quot;,&quot;belowTheFold&quot;:false,&quot;noScroll&quot;:false}" src="https://open.spotify.com/embed/episode/5wsk9tdR82YbO4hZWAaQTV" frameborder="0" gesture="media" allowfullscreen="true" allow="encrypted-media" data-component-name="Spotify2ToDOM"></iframe><p></p><h2>Timestamps</h2><ul><li><p>(0:00:00) Intro</p></li><li><p>(0:00:33) Genomic Prediction</p></li><li><p>(0:05:54) IVF</p></li><li><p>(0:12:34) Phenotypic data</p></li><li><p>(0:15:42) Predicting height</p></li><li><p>(0:28:27) Pleiotropy</p></li><li><p>(0:39:14) Optimism</p></li><li><p>(0:45:03) Gene editing</p></li><li><p>(0:48:27) Super intelligent humans</p></li><li><p>(1:01:27) Regulation</p></li><li><p>(1:06:36) Human values</p></li><li><p>(1:17:38) Should you do IVF?</p></li><li><p>(1:26:06) 23andMe</p></li><li><p>(1:29:03) Jeff Bezos</p></li><li><p>(1:34:29) Richard Feynman</p></li><li><p>(1:43:43) Where are the superstar physicists?</p></li><li><p>(1:45:37) Is physics a good field to get into?</p></li></ul><h2>Links</h2><ul><li><p><a href="https://twitter.com/hsu_steve">Steve&#8217;s Twitter</a></p></li><li><p><a href="https://infoproc.blogspot.com/">Steve&#8217;s Blog</a></p></li><li><p><a href="https://scholar.google.com/citations?user=8-KGcykAAAAJ&amp;hl=en">Steve on Google Scholar</a></p></li></ul><h2>Transcript</h2><p><strong>[00:00:00] Dan: </strong>All right. Today I'm talking with Steve Hsu. By day, Steve is a professor of theoretical physics at Michigan State University but he also runs a blog called <em><a href="https://infoproc.blogspot.com/">Information Processing</a></em>, hosts his own podcast called <em><a href="https://www.manifold1.com/">Manifold</a>,</em> has founded several technology startups, including Genomic Prediction and SuperFocus. 
Beyond physics, he's an expert in machine learning and computational genomics. Steve, I've really been looking forward to this one. Welcome.</p><p><strong>[00:00:23] Steve Hsu: </strong>My pleasure.</p><p><strong>[00:00:25] Dan: </strong>Let's get right into your work on genetics. Maybe a good entry point here is with your startup. Do you mind talking a little bit about the background on Genomic Prediction?</p><p><strong>[00:00:33] Steve: </strong>Sure. Genomic Prediction was founded, gosh, must be five, almost six years ago. Our motivation was that on the research side, I and my research group and some collaborators had been working on something called polygenic scores. I guess it's now called polygenic scores. From our perspective, it was more or less just AI/ML on big datasets, like hundreds of thousands of people where you have their genome and then you have some phenotype information or disease history information about that individual. We were trying to do the most basic thing, which is, if you think about the DNA revolution, what would you like to be able to do with this DNA? What use is this DNA?</p><p>Well, you'd like to be able to take the DNA of an individual person and predict aspects of that person just based on the DNA, like whether they're high risk for prostate cancer or whether they're taller than average, or whether they're prone to obesity. These are all really just fundamental questions that you'd be interested in from a basic science perspective. We'd done both theoretical and empirical work, actual work with large datasets, and were increasingly confident that it was going to be possible to predict lots of complex traits describing an individual person to some practical accuracy.</p><p>If you now think about this from the business perspective, is there actually any remunerative use of this capability aside from just publishing papers in <em>Nature</em>? What would that be?
A unique thing about IVF is that when a family is going through IVF, it is typical for them to have extra embryos. Not every family. I don't want to-- Obviously, there are some families that really struggle with this and are lucky to get one viable embryo, but it's common, or not uncommon, for families to have many more embryos than they want to use. At that stage, where the embryo is just about 100 cells, there's very little to distinguish two embryos from each other. What is that family supposed to do?</p><p>The traditional method of picking which embryo to transfer was to consult the human embryologist who has lots of "experience" doing this. By the way, this is a theme if you read my blog about "expert judgment" and "fake expert judgment": if you study data science and machine learning, you immediately realize that a lot of the time humans think they have signal in their predictions when they don't have any signal at all. A typical embryologist would say, "Oh, embryo three, that looks like a good one. Embryo two is a little rough around the edges. Look, it's asymmetrical. There's a little glob of extra cells on the left there. Let's not use that."</p><p>Actually, if you look into the literature, there's essentially zero rigorous statistical support for human embryologists in general, or even more specifically the embryologist at the clinic where you are, having any better-than-chance capability to make that decision for you.
This is something our critics often forget: the benchmark that we're trying to surpass is random chance, no information at all as to which embryo is better than the other.</p><p>What we knew would be the case is that once you could genotype these embryos, the genomic predictors or polygenic scores that we had built would give you quite a bit more information about, for example, the future health prospects and risks associated with a particular embryo versus one of its brothers or sisters. That was the logic, <strong>[unintelligible 00:04:45]</strong> the logic that already there's a lot of IVF going on, already there's a lot of people facing what we call the embryo choice problem.</p><p>There's essentially zero real information, as far as we can tell, being applied to solve the embryo choice problem, but polygenic scores and polygenic risk factors and all that stuff would enable us to do much, much better than <strong>[inaudible 00:05:11] </strong>technology. That's the basis of where the company came from.</p><p><strong>[00:05:15] Dan: </strong>It seems like this work in genomics is at the center of a couple of different disciplines. We're at what seems to be a happy coincidence of multiple different accelerating technologies right now. I want to pick this apart one by one. Let's maybe actually start with IVF, because this one, to me, a priori seems like it would be one of the hardest: to shoot up a human female with a bunch of hormones and get 20-plus eggs or something. But this has been around for a little while. Can you just describe a little bit more about how common IVF is and what's going on with it right now?</p><p><strong>[00:05:54] Steve: </strong>Now, just to be completely clear, because I'm a scientist and a professor, I don't take any credit for IVF technology, which was largely pre-existing before our company was founded.
Although we are actually making important contributions to IVF now, overall, the basic wet lab technology of how to produce embryos and transfer them is not due to us. I will say that on the board of scientific advisors to Genomic Prediction is one of the scientists who was on the first team that did the very first IVF, that produced the very first IVF baby in the UK.</p><p>We have that person on our advisory board. Our patient advocate in the company, Elizabeth Carr, is the first IVF baby born in the United States. She's the first US IVF baby and she just celebrated her 40th birthday not that long ago. It's been around for a while. The basic observation is that by administering a hormone cycle to a woman, you can cause her to overproduce eggs in that cycle, and those can be easily harvested. It's basically a nurse with a long needle who is able to harvest those eggs, and they're fertilized outside the woman's body. In vitro, in the lab.</p><p>They're fertilized by a technician, and then you generate potentially a large number of embryos, which are then allowed to grow until they're typically about 100 cells. By the time we had come onto the market, it was becoming more and more common, certainly at the best clinics, to freeze the embryos once they reach roughly 100 cells. The reason for that is to give the mother's body some time to recover from the hormonal cycle it just went through before the transfer occurs. That was found to get better results. So already, in the normal workflow, there's going to be a lag period where the embryos are frozen in liquid nitrogen.</p><p>By the way, freezing and thawing of embryos seems to not damage them in any way. They seem to work just as well. They're very robust little molecular machines. Basically, you freeze them in liquid nitrogen, you thaw them, and you might have some loss, but generally, it doesn't damage them.
Now, before we entered the IVF scene, it had already become commonplace for a small biopsy of a few cells to be taken from the part of the embryo which is going to become the placenta. It's not actually the child, but those cells have the same DNA as the child; that part becomes the placenta later on.</p><p>The biopsy would be taken from that for genetic screening purposes. In our startup design, if you're a startup guy and you're introducing a new technology, you don't want to disrupt the pre-existing workflow of the industry. You want to fit your new innovation in such a way that it doesn't disrupt what had already been there. That same biopsy is sent to Genomic Prediction, and we amplify the DNA to the point where we're able to get a whole genome genotype, actually for the first time a whole genome genotype for each embryo, and then compute all the predictive polygenic scores and stuff like that.</p><p>That's a quick sketch for people who are not really familiar with IVF. In terms of utilization, there are some countries where about 10% of all babies born now are coming from IVF. This is a consequence of women having careers, delaying marriage, obtaining more education, just being older when they finally get to the point where they're ready to start a family. Women's fertility starts decaying. It's just the truth. It's something people are generally not aware of: women's fertility tends to decay starting typically in the late 20s, early 30s. That might shock you because you're like, "Wait, we have this plan and my wife is going to be 35 when we have our kid." Well, you better look up the statistics.</p><p>If you're just a little bit unlucky, by the time you're 35, you could have real fertility problems. Almost everybody has fertility challenges before they turn 40.
Exactly when your fertility decline begins and accelerates obviously depends on the individual, but 30 to 40 is typically where that's happening.</p><p>Now it's increasingly common, especially for highly educated professionals, to require IVF. In countries like Denmark and Israel, the state health care system actually pays for IVF. In those countries, it's widely utilized. The percentage of all births that come via IVF is high. It's like 10%.</p><p>In the US and other developed countries, it might be like 3% or 5%. It's a significant number. If you're not familiar with IVF and I just take you to some kindergarten and point to the kids, there will be IVF kids among that group of kids on the playground that we're looking at. It's a non-negligible component of how humans reproduce today.</p><p><strong>[00:11:38] Dan: </strong>Okay. I think that's an important point to drive home. This is happening today. People do IVF and you'll be given a set of embryos. You have a choice, which is: do you want to pick which one you want to use? What you all are doing is giving them options to say, based on some data, we can give you information about what is likely to happen based on which selection you make.</p><p>It'd be interesting to get a little bit into how you all make those predictions. There are kind of two components, right? There's actually gathering the data. We need a bunch of genotypes with labeled phenotypes for different traits. Then, we also need the machine learning models to actually do the AWS run or whatever, right? Let's maybe talk about the data first. I'm really curious.</p><p>I noticed that in your papers and others, the UK Biobank is referenced a lot. Can you talk a little bit about where data exists today and where most people are getting it to do research?</p><p><strong>[00:12:35] Steve: </strong>Yes. Most of the data that's available for researchers like us to do our work comes from government-funded large biobanks.
UK Biobank is probably the best known one. There's also more than one funded by the NIH. There's one called All of Us, which is approaching, I think-- Well, it's not quite at a million, but the target is to get to a million individuals.</p><p>There's something called the Million Veterans Project, which is run by the health care arm of the Veterans Administration, and which is also around a million people. My lab in particular also collaborates with the Taiwan biobank. In Taiwan, we have available something like a million genotyped individuals to analyze. There's a biobank in Finland. There's one in Japan. It's all over the place, really.</p><p>However, it is true that most scientists in this field spend a lot of time trying to get access to data. A lot of what we're doing is data limited, not algorithm limited. I would say the innovations developed by my group in terms of ML algorithms to do this kind of work are already good enough, actually, to work quite well. The main thing we're waiting for is actually just to get more data.</p><p>You might say, "Oh, well, wait. Don't you have millions of people to analyze?" Yes, but we need to break it down. Say you're studying a particular disease. What you really need to get signal is to compare the genotypes of cases and controls. Cases in medical research means individuals with the disease condition; controls are people who don't have the disease condition.</p><p>If the incidence of the disease is 1% or 5%, suddenly, the number of cases is much smaller than the total size of your biobank, right? Most of your signal is really coming from the cases. If you're studying cognitive ability, for example, it's very hard to get cognitive scores for individual samples. It's a complicated field.
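</p><p>The case/control point can be made concrete with a little arithmetic. The biobank size and incidence figures below are illustrative placeholders, not values from any specific study:</p>

```python
# Why raw biobank size overstates usable data for a disease study:
# most of the statistical signal comes from cases, and cases are rare.
biobank_size = 1_000_000            # illustrative cohort size

def expected_cases(incidence, n=biobank_size):
    """Expected number of affected individuals at a given disease incidence."""
    return round(incidence * n)

cases_common = expected_cases(0.05)   # 5% incidence: 50,000 cases
cases_rare = expected_cases(0.01)     # 1% incidence: only 10,000 cases
```

<p>So a million-person biobank may contribute only tens of thousands of cases for any one condition, and that, rather than the headline cohort size, is what limits the predictors.</p><p>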
It's now a big field.</p><p>When we published our first paper on height in 2017, where we successfully predicted height with pretty good accuracy, a few centimeter accuracy, it was a shock to the research community. At that point we were one of only a few groups really focused on polygenic scores. Now, I'm guessing there must be toward a thousand papers published every year which are doing something with polygenic scores. It's become a big international area of research.</p><p><strong>[00:15:17] Dan: </strong>Let's dig in actually to the height thing because this seemed to me actually a really big deal. Tell me if I'm getting this right. Was it that you all predicted ahead of time how much data you would need to accurately predict height? Then, you got that data and you were able to accurately predict it. Is that roughly how this went? That seems like a really big deal. It could have implications for other traits moving forward as well.</p><p><strong>[00:15:42] Steve: </strong>Correct. I won't get the years exactly right. I'll get them roughly right. I think in 2014-ish, around 2014, maybe it was a little bit earlier than that, we published some theoretical papers. At that time, there were no big biobanks available for analysis, but there were smaller data sets.</p><p>We did some theoretical work. There's a bunch of fancy math here that goes under the name compressed sensing or L1 penalized regression. There are even very well known papers on this subject in which one of the co-authors is the Fields medalist Terry Tao. There's a purely mathematical question: if I give you noisy data of a certain kind and you use a particular set of algorithms, how much data do you need to solve the problem, to recover the signal fully? That's a signal processing problem, or you could think of it as an information theory problem, a problem in analysis, whatever you want.
It's pretty widely studied, mostly in the math and applied math community, a little bit in electrical engineering and computer science, much less so in genomics, even computational genomics.</p><p>Very few people in the field of computational genomics are aware of these compressed sensing theorems, which I was aware of. Actually, in deciding to work in this field-- that was about when I decided to work in this field-- being a theoretical physicist, I said I'm not just going to randomly choose some area to start working in, I need to understand theoretically what is possible. If I'm going to invest some years of my effort in this direction, I want some theoretical guidance, because it could turn out that you need hundreds of millions of genotypes to get anywhere. If that's true when I do these back-of-the-envelope calculations, it's not going to happen for a long time, 20, 30 years. So not interesting to me.</p><p>If the answer had come out like that, I would have been, like, let some biologist slave over this, whatever. I did that preparatory theoretical work. That is common in physics. In physics, theory is very highly developed. Before you do anything, you do preparatory theoretical work to understand the problem, and then you decide, is this worth doing? Is it worth building this accelerator?</p><p>Is it worth putting this satellite in space to look at the Big Bang? We need to understand a bit more about the problem before we invest resources in it. Biomedicine is completely the opposite. Biomedicine is just wild-ass speculations about stuff. Blowing huge sums of money. NIH has more money than all the other sciences and engineering research budgets combined.</p><p>Senators understand dying of cancer, but they don't understand anything else. Basically NIH gets all the money, and then biomedicine is just throwing money in the air like this. They don't have any theoretical understanding of what they're doing. They're just trying stuff.
They'll reject what I'm saying.</p><p>This is what any theoretical physicist or mathematician or computer scientist who gets involved in biology will tell you. They're actually, in a way, not respected. Biologists wouldn't respect a mathematician because generally math is not that useful for them. Math doesn't help them with what they're doing. They have to do this very experiment-intensive, empirically driven stuff.</p><p>That's just how their field is because living systems are so complex. What I'm saying is, I think, not politically correct, but it's easily verifiably true. For my own effort I said let me do this theoretical analysis first to figure out whether solving the genome, in other words, predicting phenotype from genotype, is a solvable problem with realistic amounts of data and compute that I'm going to have available to me. Or should I just give it a skip and continue thinking about black holes and quantum fields?</p><p>I went through that. I think by 2012 or 2014, we had published the results, using some very fancy math with something called the Donoho &amp; Tanner phase transition in compressed sensing. We were able to use that to calculate and predict how much data it would take to "solve" a complex trait using realistic genomic data. We predicted that as soon as we had at least a few hundred thousand genotyped individuals and we had height measurements, we would be able to build a reasonably good height predictor. The very first moment a data set of that kind became available was 2017, when the UK Biobank released its first data and allowed researchers to apply for access.</p><p>Within a month of getting access to UK Biobank data, we had built predictors that had this few centimeter accuracy. It was shocking to the genomics community. If you look at the journal <em>Genetics</em>, where we published the paper, <em>Genetics</em> is the preeminent genetics publication in the United States for these things.
It's the journal of the Genetics Society of America. It's the leading journal in the US specifically about genetics.</p><p>Our article is an editor's starred article or something. They have some way of putting a gold star on an article that means the editor thinks it's an important advancement in the field. I think we posted the pre-print in the fall of 2017, and the paper was published in early 2018 as an editor's selected article. I have the referee report, so you can see the referees saying like, "Wow, I can't believe this is possible."</p><p>Now, younger researchers who maybe entered the field since then, or were already just starting grad school back then, will say, and maybe they're even being sincere, but they don't know the history of their own field, they will claim that none of this is surprising. Like, "Oh, we always knew we were going to be able to do this," and something like that. It's all bullshit because if you were in the field in 2017, people were saying like, "Oh, wow, there's all this missing heritability. I don't know how we're ever going to solve this missing heritability problem. I think all genes interact with all other genes, so therefore, modeling complex traits will never be possible."</p><p>If you're a serious historian of science, you can go back and analyze what people were saying in 2015, 2016, 2017. Up until the point our paper came out, you can see they thought what we were doing was impossible. It's easy for people to verify this. Now, they will deny it because now polygenic scores are everywhere. It's a big deal. It's a big research area, and they'll claim, "Oh, we knew this all along," whatever. This is like inside baseball for how science works.</p><p>There's a joke that says the reaction to scientific discoveries goes like this. First reaction, "It's wrong. You can't do that. What are you talking about, Steve?
All genes interact with all other genes. You'll never be able to predict this. It's highly non-linear." Number one, it's wrong. Then the next reaction is, "Maybe you're right, but it's trivial. I knew it all along, right? Of course, you did it, but it's not that big a step. We all knew it was possible." It's wrong. It's trivial. Then the final one is, "I did it first."</p><p><strong>[00:23:32] Dan: </strong>[laughs]</p><p><strong>[00:23:34] Steve: </strong>The person who said, like, "You were wrong," and then, "Oh, your result is trivial," is now saying, "I did it first. Please reference my paper." That's like a common cycle for the acceptance of new scientific insights. I'm not the first person to formulate it that way. Even people like, I don't know, Heisenberg and Planck and other people said the same thing. Quantum mechanics, they said, "It's wrong." "Okay, you guys might be right, but this is trivial," and then like, "Oh, wait, I did it first. I knew it was going to work like that."</p><p>Anyway, so we predicted how much data we would need. When that amount of data became available, we solved the problem, and our result was replicated by several other prominent groups. Now it's even been replicated more thoroughly by a study which used, I think, several million individuals for whom they had height. It's basically incontrovertible for a trait as complex as human height: you need to know the state of about 10,000 individual regions of your genome, about 10,000 different genetic variants that are used by the ultimate predictor in order to calculate the estimated height. There are 10,000 individual loci, with different effect sizes.</p><p>Now, again, in 2017 or 2015, people would have said it is insane to think that crazy data science AI/ML guys will be able to build something that complicated that depends on-- think about biologists, right? How many biologists can count to 10,000, right?
They would say, like, "It's insane. Genes all interact with each other. You guys will never be able to build a predictor that uses 10,000 different genetic variants scattered over all chromosomes that impact this really complicated trait," but now, it's been fully validated. Anybody who knows about computational genomics agrees. There are very good height predictors. They use about 10,000 SNPs.</p><p>Furthermore, even though I just stated, like, "It's impossibly complex from the viewpoint of a lab biologist," from the viewpoint of information theory, from a computer science viewpoint, it's incredibly simple, because there are 3 billion different variants that could have affected height, 3 billion. 10,000 out of 3 billion means a very low fraction of all regions of DNA actually matter.</p><p>Actually, as we predicted, height turned out to be this sparse trait. All traits, actually, all complex traits of humans are sparse. They only depend on a small fraction of the 3 billion variants that exist in the genome. Furthermore, the effects are largely additive. It's a largely linear model that accounts for this. It's not even this weirdly non-linear thing <strong>[unintelligible 00:26:49]</strong>.</p><p>Anyway, in a way height is the poster child for what can be done in complex trait genomics. It's pretty well understood, at least by the people who are experts in this field. I think that understanding has not propagated far outside of computational biologists or, really, computational genomicists, who really actually <strong>[unintelligible 00:27:14]</strong> computer scientists and statisticians. Old school genetics scientists who made their whole career about one gene or something, who don't use a lot of math, that group doesn't fully understand what I just said to you. And if you go out to other biologists who don't deal with DNA data on a regular basis, they have no idea what I just said.
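</p><p>The sparsity-plus-additivity structure described here is exactly what lets L1-penalized regression work. A toy sketch of sparse recovery by coordinate descent follows; the planted loci, effect sizes, and sample counts are all invented for illustration, and real predictors use vastly more SNPs and individuals:</p>

```python
import random

# Toy demonstration of sparse recovery with L1-penalized regression.
# A handful of planted causal variants drive an additive trait; the
# L1 penalty recovers them from many irrelevant candidates.
random.seed(0)
n, p = 150, 40                            # individuals, candidate variants
effects = {3: 2.0, 17: -1.5, 29: 1.0}     # planted causal loci (hypothetical)

# Genotypes coded 0/1/2 (minor-allele counts), mean-centered per column.
X = [[float(random.choice([0, 1, 2])) for _ in range(p)] for _ in range(n)]
for j in range(p):
    mu = sum(row[j] for row in X) / n
    for row in X:
        row[j] -= mu

# Phenotype = additive genetic signal + small environmental noise.
y = [sum(row[j] * b for j, b in effects.items()) + random.gauss(0, 0.1)
     for row in X]
mu_y = sum(y) / n
y = [v - mu_y for v in y]

def lasso(X, y, lam, sweeps=100):
    """Minimize ||y - X.beta||^2 / 2 + lam * ||beta||_1 by coordinate descent."""
    n, p = len(X), len(X[0])
    beta = [0.0] * p
    resid = y[:]                                   # y - X.beta, kept current
    z = [sum(row[j] ** 2 for row in X) for j in range(p)]
    for _ in range(sweeps):
        for j in range(p):
            rho = sum(X[i][j] * resid[i] for i in range(n)) + beta[j] * z[j]
            # Soft-threshold: coefficients with weak evidence go exactly to zero.
            new = (rho - lam) / z[j] if rho > lam else \
                  (rho + lam) / z[j] if rho < -lam else 0.0
            if new != beta[j]:
                step = new - beta[j]
                for i in range(n):
                    resid[i] -= step * X[i][j]
                beta[j] = new
    return beta

beta_hat = lasso(X, y, lam=5.0)
recovered = {j for j, b in enumerate(beta_hat) if b != 0.0}
# With enough samples relative to the sparsity, `recovered` contains the
# planted loci and little or nothing else.
```

<p>The soft-thresholding step is what produces sparsity: variants with weak evidence get coefficients of exactly zero, so the fitted predictor uses only a small subset of the candidate SNPs, mirroring the roughly 10,000-SNP height predictor described above.</p><p>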
Anyway, so that's my history of this field <strong>[unintelligible 00:27:40]</strong>.</p><p><strong>[00:27:41] Dan: </strong>I can remember back in-- I was in grade school in the 2000s, but there was a big deal about the Human Genome Project, right? There was Dolly, the cloned sheep, and it seems like there was all this optimism about the field, and then it went dark for the next 10 or 20 years. Everyone got excited about the iPhone or whatever. Now, this stuff is so recent, and what has been shocking me as a layman is exactly what you said. Intuitively, I would think that these polygenic traits are just impossibly complicated, and maybe if you reduce the risk of diabetes, you're jacking up cancer or heart disease or something, but you guys have also shown that typically, that is not true. You could even create a generalized health score. In some cases, there are even mildly positive effects for other health measures. Is that right as well?</p><p><strong>[00:28:27] Steve: </strong>Yes. It's a great point that you're raising. I just want to point out that anyone who approaches this subject with an open mind can follow it, and I think your background is in engineering and software development. If you have decent math chops and a scientific technical background and you just approach this area with an open mind, you can read papers written by our group or other groups in the field and understand, like, "What is being said here now about genetic architecture?"</p><p>Because up till now, although we could read out genomes, we didn't really know what they did, right? That was the Human Genome Project: when you were a kid, around 2000, they sequenced the first human genome. Those guys were, as always with biotech entrepreneurs and tech entrepreneurs, overhyping what was happening, right? That was just the ability to read out an individual's DNA.</p><p>The analysis by which you figure out, "What can I predict based on that?"
took the next 20 years, basically, and the accumulation of large numbers of genomes, right? Another fairy tale was invented by biologists back when they literally knew nothing about this-- the first genome had not been sequenced, they knew nothing about machine learning, they knew nothing about compressed sensing, et cetera-- this weird fairy tale called pleiotropy. Pleiotropy says a particular gene or a particular locus in your genome is bound to have multiple effects. Right? Exactly why that had to be true-- Okay. You could say it this way: a gene, by definition, is a region of DNA that codes for a protein, and there are only about 20,000 different genes in the human genome. Of course, between the genes there are tons of other DNA switches that are doing other things, and until recently people had no idea what was going on with these other things.</p><p>Of course, from an information theoretic standpoint, you realize it can't just be the information in the protein-coding regions; there had to be these other switches that are controlling things. Otherwise, how could I be that different from a worm? Even though we have pretty much the same-- we're using the same proteins, slightly different variations of these proteins, me and the worm, right? But we're very different, right? There must be other information that's involved in making me versus making the worm.</p><p>I think, to be totally blunt, a high IQ kid who's 12 years old who reads a popular book about DNA would understand this, but it's still confusing today to most biologists. For a long time these guys thought genes are everything and the rest is junk DNA, okay? Genes are everything, but there are only 20,000 genes.</p><p>Therefore, if I modify one gene, I have to affect lots of different things, because for sure that protein is playing a role in lots of different things going on in your body.
Hence the idea of pleiotropy. Now, once you have predictors-- we have predictors for 20, 30, 40 different complex traits ranging from diabetes risk and schizophrenia risk to height and BMI. You can just look at the predictors, and they're sparse.</p><p>Remember, they use only a fraction of all the information that's in your genome. Each one of them is only using a fraction. Now, the fraction is scattered all over the genome on your chromosomes, but it's not using most of the SNPs, most of the genetic variants in your genome. You can just ask this trivial question. You can say like, "Well, how many of the SNPs used for height are common to the predictor built for schizophrenia? How many of those are common to the predictor built for heart disease? How many of those are common to the one that's predicting your diabetes risk? Or whether you have brown hair?"</p><p>Someone, again, who knows a little linear algebra, but most biologists don't know any linear algebra, would say, "Oh, why not compute a correlation matrix between these predictors? Look at the correlation in variance accounted for between these different predictors, and let's see how disjoint-- using a fancy mathematical term, let's see how disjoint the input SNPs are, the sets of input SNPs for each of these different phenotypes, different traits.</p><p>How disjoint are they? Are they very disjoint, in which case pleiotropy is bullshit, or are they hopelessly overlapping, in which case pleiotropy is correct?" Turns out they're largely disjoint. Now, how many biologists actually understand this today? Very, very few. You have to understand something about information theory, or maybe at least linear algebra, correlation matrices, and polygenic risk predictors or trait predictors.</p><p>Then, combining that, it's trivial to ask, "Oh, gee, that old term in the textbook I had in graduate school called pleiotropy, whatever happened to that?"
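</p><p>The disjointness check described here can be sketched in a few lines. The SNP sets below are random placeholders, purely to show the computation, not real predictor contents:</p>

```python
import random

# Toy version of the pleiotropy check: treat each sparse "predictor" as a
# set of SNP identifiers and measure pairwise overlap. Set sizes and the
# genome size are illustrative round numbers.
random.seed(1)
GENOME = range(1_000_000)                  # candidate variant IDs

predictors = {
    "height":        set(random.sample(GENOME, 10_000)),
    "schizophrenia": set(random.sample(GENOME, 5_000)),
    "diabetes":      set(random.sample(GENOME, 5_000)),
}

def jaccard(a, b):
    """Overlap of two SNP sets: 0 = fully disjoint, 1 = identical."""
    return len(a & b) / len(a | b)

traits = sorted(predictors)
overlaps = {(a, b): jaccard(predictors[a], predictors[b])
            for i, a in enumerate(traits) for b in traits[i + 1:]}
# Sparse sets drawn from a large genome are nearly disjoint: every pairwise
# Jaccard index here comes out well under 1%.
```

<p>With real predictors the interesting part is empirical: the measured overlaps between trait predictors also come out small, which is the evidence against strong pleiotropy.</p><p>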
Then if you understand these three or four different conceptual areas, you can just immediately answer the question, but again, it hasn't propagated very far. Now, another way to say this is the following. It's just basic math; when two physicists talk, we're always doing this kind of math, but again, not everybody can do this.</p><p>If I'm trying to explain all these results to some guy whose day job is studying black holes in Anti-de Sitter Space or something, and he's like, "Steve, I hear you've been dabbling in this biology stuff. Why are you doing that?" Then like, "Well, what did you guys find out?" I'm explaining all this, and then the guy will immediately ask me, he'll say, "Well, between any two humans, how many individual variations are there? My genome and your genome, how many differences are there?"</p><p>The answer is it's on the order of a few million out of 3 billion base pairs. It's about one per thousand. There are millions of ways in which my genome is different from yours, right? The most complex trait that we found so far, which is height, uses 10,000 of those variations, but let's suppose the average polygenic predictor is using a few thousand genetic variants. If I divide a few million by a few thousand, the answer is 1,000.</p><p>Physicists are always doing these <strong>[unintelligible 00:35:21]</strong> math calculations, right? It turns out there's enough information between any pair of two humans to specify about 1,000 different complex traits if those complex traits were all independent of each other.
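</p><p>Written out, the back-of-the-envelope calculation is one division. Both figures are the rough orders of magnitude quoted here, not measured values:</p>

```python
# Steve's back-of-the-envelope, as arithmetic.
pairwise_differences = 3_000_000   # variant differences between two random people
snps_per_trait = 3_000             # "a few thousand" variants per sparse predictor

independent_traits = pairwise_differences // snps_per_trait
# 3,000,000 / 3,000 = 1,000 roughly independent complex traits' worth of information
```

<p>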
It's a trivial information theory calculation that says the common variation found, the differences between two randomly selected humans, is about enough to account for 1,000 completely independent complex traits roughly like the complex traits we have studied so far.</p><p>That means if you're playing Dungeons &amp; Dragons-- normally in Dungeons &amp; Dragons, your strength is independent from your dex, which is independent from your intelligence, which is independent from your wisdom, which is independent from your charisma. I forgot if there are more. Constitution, maybe it's six. Characters in D&amp;D live in a six-dimensional independent-- six independent dimensions, right?</p><p>It looks like humans, given the amount of information in our genomes and the amount that we vary from each other, could live in a 1,000-dimensional space of complex traits. Even assuming exactly zero pleiotropy, exactly zero pleiotropy, where each set is entirely separate from the others. I don't know if I went through that too fast, but it's kind of trivial logic. Trivial-for-some-people logic that says, wait a minute, there might be some pleiotropy, but actually, there are a lot of independent ways that I can tune people's genomes and get what I want.</p><p>The size of my spleen is actually not directly related to my IQ, and the length of this finger could be somewhat independent of the length of that finger. There are 1,000 different things, ballpark, that could be basically not interfering with each other, or could be interfering only weakly with each other. That's the situation, and if you look at papers we wrote around 2020 and 2021, all this is laid out in those papers, but it's not understood by a lot of people.</p><p>By the way, part of the problem is that to get money from NIH, you have to focus in on one narrow thing, maybe one disease or something.
Then people who know the diabetes polygenic risk index, they know that, but they know nothing about heart disease. They know nothing about height. They would never study height. They would not study IQ. They don't even know what IQ is.</p><p>Of course, if you're a jeweler with these magnifying glasses and you're looking only at one part of this watch, you don't know what the weather is outside. That is kind of how <strong>[unintelligible 00:38:08]</strong> specialized in the biomedical sciences.</p><p><strong>[00:38:14] Dan: </strong>This seems like-- I keep using the word, but happy coincidences. It's a miracle, right? It's like, oh, wow, there's actually room for potentially a lot of important traits that we can just maximize with potentially no downside. I don't know if you're a sci-fi person and you get excited, and maybe some people are repulsed by this, but you can get excited about it and think it could improve human outcomes in the future.</p><p>I'm curious, let's put the societal barriers and constraints aside, things like data collection and the general ethics of choosing between embryos. From a purely technical point of view, what do you think are the biggest barriers to us in the future being able to have people that have substantially better health outcomes, that are more fit, better looking, smarter? Do you see any real potential technical gotchas, or are we looking at mostly an issue of ethics and data?</p><p><strong>[00:39:14] Steve: </strong>I think you have the right perspective on this: in a way, things turned out in the most positive way. Because it could have been that the encoding of genomes is so complicated we would never decipher it, or we would need 100 times more data than we have today in order to decipher even much simpler traits. That could have been the case.</p><p>That was the main thing I was worried about.
I was mainly worried about nonlinear interactions making the code space much more complicated than it turned out to be. That was my main worry in 2015 or something like that. Then once we got the data and started working with it, we realized, nah, it ain't like that. I gave an interview to <em>The Sunday Times</em> a while back, and the title of it was something like yes, superhumans are possible, because that's what the science is telling us.</p><p>Now, again, most people are not aware that the science is strongly pointing toward the answer being yes, superhumans are possible, but I'm telling you, yes, superhumans are possible. One of the things we did in my lab is we created an overall health index. We took genomic predictors for 20 different disease risks, the major disease risks that kill people. We took about 20 of them and we created an index summing over each of the risks for those major disease conditions, weighted by the life expectancy reduction typically associated with each condition. The most impactful has the biggest coefficient and the least impactful has the smallest coefficient. You just sum it up and you make a health index.</p><p>Similar things have been studied by other groups too; the Finns also studied this. They have a nice study of their Finnish population gene bank, also using a similar longevity index. Again, the individual SNPs that are controlling health risk number one, like heart disease, versus diabetes, versus lung cancer, all these things, most of those sets of SNPs are disjoint. It looks to us like, just using what we already know, the information that's in the predictors as it is, you could hypothesize a human who's a low-risk outlier for each of those 20.</p><p>When we genotype people or embryos, we can just compute this health score, right? We can see like, "Oh, Johnny is low risk for this, medium risk for this. Oh, poor Johnny.
He's a high-risk outlier for high blood pressure." We can do that. Now, there's no reason, as far as we can tell judging by the specific genetic variants, the individual SNPs that are involved, there's no reason there cannot be a person who's bottom one-percentile risk, low risk, for each of these 20 conditions. Who knows how long that person would live. That person might live 150 years.</p><p>The number of people who have been simultaneously low risk for all of these conditions that have ever lived might be zero. There haven't been enough humans around to realize the luck involved in being a one-percentile positive outlier in terms of low risk for all of these killer conditions. Maybe that person has never lived. Maybe it's a 0.01-to-the-20th-power chance of this happening or something, right?</p><p>You could get there in principle by engineering or by having a huge number of embryos we're able to choose from. As far as we can tell, this particular phenotype, longevity, or disease risk, that's not what people are afraid of. People are not afraid of us talking about this; you can even get funding from NIH to talk about a longevity predictor, whatever. The ones people don't like are the ones like intelligence. The question is like, "Oh, well, what if I do this trick with intelligence? I look at the predictor--" Genetic predictors for intelligence are not that strong yet, but you can at least see some trends.</p><p>It does look like there's a lot of variation up for grabs there, and you could shift the mean. You could move an individual many, many standard deviations if you wanted to. In principle, if you had perfectly accurate editing capability, how far could you shift an individual in their IQ score? The answer is you could shift them really far, probably beyond any human genius, any historical genius, that has ever lived.
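The health index described above can be sketched in a few lines of Python. Everything here is invented for illustration: the condition names, the life-expectancy weights, and the risk numbers are placeholders, not values from any real predictor.

```python
# Sketch of the health-index idea: sum per-condition polygenic risk,
# weighted by typical life-expectancy impact. All names and numbers
# are made up for illustration, not taken from any real predictor.
LIFE_YEARS_LOST = {
    "coronary_artery_disease": 8.0,   # most impactful -> biggest coefficient
    "type_2_diabetes": 6.0,
    "lung_cancer": 12.0,
}

def health_index(risk_z, weights=LIFE_YEARS_LOST):
    """Weighted sum of per-condition genetic risk (z-scored).

    A lower (more negative) index means lower aggregate genetic risk.
    """
    return sum(weights[cond] * z for cond, z in risk_z.items() if cond in weights)

# Hypothetical person: low heart-disease risk, high lung-cancer risk.
johnny = {"coronary_artery_disease": -1.2, "type_2_diabetes": 0.1, "lung_cancer": 2.0}
score = health_index(johnny)   # 8*(-1.2) + 6*0.1 + 12*2.0, roughly 15.0
```

Selecting for a low index is then just ranking embryos or people by this one number, exactly as described above.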
That particular analysis, all of a sudden now, if you're a left-wing extremist, whatever, or woke person, you would say, "Oh, my god, you're talking about shifting intelligence. You must be a eugenicist." Like, "You must secretly salute the Nazi flag in your basement or something." That one's a little more fraught, but the math is basically the same math we were talking about when we were talking about longevity.</p><p><strong>[00:44:34] Dan: </strong>Let's talk about intelligence. We've been building up to this, but I think this is where it gets really, really interesting. One of the technologies that we need, other than the data and the training runs, is the ability to either pick from a bunch of embryos, which is going to be a generation-by-generation improvement, or to edit an existing embryo. You mentioned it in that response, but we haven't talked much about editing. What does the current state of editing look like?</p><p><strong>[00:45:03] Steve: </strong>Yes. Of course, editing a human embryo is a big no-no. A certain guy spent a few years in jail after doing this in China. Obviously, the first thing we should say is, well, this is not considered an ethically okay thing to do. We're just talking about science fiction. You and I just happen to be interested in the character Khan from <em>Star Trek</em>. You're too young to remember this, but anyway. Okay, we're having a science fiction discussion. It's about <em>Dune</em>. In <em>Dune</em>, they were trying to breed the Kwisatz Haderach, okay? That's the novel <em>Dune</em>. We're talking about science fiction now.</p><p>Currently, there's been continued improvement in CRISPR technology. The thing that you're contemplating, though, is a situation where you might make hundreds of edits to an embryo. There are hundreds of different places where you figure out, "Okay, I want to make a change here." Then you do it.
Currently, the technology, as far as I know, does not exist to do that without also having a significant risk of off-target edits, edits that were unintended by you. There's a wet-lab, biological limitation in our gene-editing technology right now that makes this hard.</p><p>The other limitation, which is a little bit subtle, is that when we build these predictors, we're using a particular SNP, the state of your genome at a particular place, in order to help us make the prediction, but we don't know that that particular SNP that we're using in the predictor is itself causal. The states of nearby SNPs, because they tend to be inherited together, are correlated. We only need correlations to do the prediction, so we might be using a tag SNP to do the prediction, but the causal SNP is actually next to it. If we edit the tag SNP but not the causal SNP, we get no effect.</p><p>The more difficult problem, which is more of an information-theoretic or computational problem, is to determine not just which SNPs are enough for me to predict the phenotype, but which ones are actually causal, because I actually want to change the phenotype. That is also an unsolved problem. That problem will require a lot more data than we currently have, because it's hard to tell: in almost all the people in our gene bank, the state of this SNP is the same as the state of-- these two states are correlated at like 0.9. It's very hard for me to tell whether this one is the causal one or that one is the causal one. You have little clusters of these things in your genome. There's a technical problem at the computational level that is a roadblock to gene editing supermen.</p><p><strong>[00:48:12] Dan: </strong>Okay, so gene editing, it sounds like there's still some work to be done. What about if we just iterated over a bunch of IVF generations?
I think there was a paper done on this, actually, that tried to figure out how many standard deviations you could get out of intelligence.</p><p><strong>[00:48:27] Steve: </strong>Well, it depends on a lot of parameters, like how many embryos you have to choose from and also how good your predictor is for making the selection. These calculations are pretty straightforward. Once you make the model assumptions, you can do the calculations. The problem is we don't know the parameters, because we don't know how good our predictor for intelligence will be in 2030 or 2035, et cetera. There are a lot of unknown parameters that would have to feed into the calculation.</p><p><strong>[00:48:57] Dan: </strong>How good do you think it'll be for intelligence if we had just straight IQ tests, the best IQ tests that we have today?</p><p><strong>[00:49:03] Steve: </strong>If you had well-phenotyped individuals, so they had been given even just the SAT or some very good IQ test, cognitive test, and you had-- my estimate is, if you had, let's say, a few million people with such phenotypes attached, you could then build an IQ predictor with accuracy maybe plus or minus 10 IQ points. From that, then you could start selecting. Well, it also depends on how many IVF resources you give to people. If it becomes a social convention for women to freeze a bunch of eggs, to do extraction cycles when they're 20, maybe they could get 100 eggs, freeze them, and then use them later in life. Then you're selecting the best out of 100 and you have a pretty good predictor. You could be moving the mean in the population, like, one standard deviation every generation. That would be huge, because in a few generations you would have an unrecognizable human population. The whole population of the planet is like the student body of MIT or something.
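That "one standard deviation per generation" figure can be sanity-checked with a small Monte Carlo sketch. All the parameter values below are illustrative assumptions, not Steve's actual model: 100 embryos, a predictor correlating about 0.6 with the trait, and roughly half the additive genetic variance varying among siblings.

```python
import math
import random

def selection_gain(n_embryos=100, r=0.6, within_family_var=0.5,
                   trials=20_000, seed=1):
    """Average true-score gain (in population genetic SDs) from implanting
    the embryo with the highest predicted score out of n_embryos.

    Assumptions (illustrative, not from any real predictor):
    r                 -- correlation between the polygenic predictor and
                         the true score
    within_family_var -- fraction of additive variance that segregates
                         among siblings (~1/2 under simple additive models)
    """
    rng = random.Random(seed)
    sd_wf = math.sqrt(within_family_var)
    noise_sd = math.sqrt(1.0 - r * r)
    total = 0.0
    for _ in range(trials):
        best_pred = -float("inf")
        best_true = 0.0
        for _ in range(n_embryos):
            z = rng.gauss(0.0, 1.0)                  # sibling deviation, unit SD
            pred = r * z + rng.gauss(0.0, noise_sd)  # noisy predictor of z
            if pred > best_pred:
                best_pred, best_true = pred, z * sd_wf
        total += best_true
    return total / trials
```

With these assumptions the simulated gain comes out near one population standard deviation, about 15 IQ points, which is consistent with the figure quoted above.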
Yes, of course this is all hypothetical science fiction and possibly morally wrong.</p><p><strong>[00:50:25] Dan: </strong>Just based on the human species, how many standard deviations is it possible to go? To push this intuition a little bit, if we just engineered dogs, presumably there would be a limit, and we wouldn't have dogs that could-- Maybe they could get close to us today, but presumably they don't have the same limit that humans do. Is there a way to think about that?</p><p><strong>[00:50:47] Steve: </strong>It's pretty hard to do it from a mathematical perspective. What we can conclude, from the fact that the level of polygenicity of cognitive ability is probably at least as great as height's, is that it's probably at least 10,000 different variants that are controlling the common variation in cognitive ability. There's a little math involved in this next inference, but it turns out it's the square root of that number that determines how many standard deviations are up for grabs. If you take the square root of 10,000, you could say, very conservatively, there are at least 30, maybe 100 standard deviations up for grabs. You could shift the mean.</p><p>Now again, a standard deviation is 15 IQ points. If you shift 30 standard deviations, the IQ went up by 30 times 15, which is 450 IQ points, and we have no idea what the hell that means, because we're used to thinking about variations of 10, 15, 20, 30 IQ points, not 400 IQ points. The inference that you can make is that there's an unimaginable amount of variation up for grabs.</p><p>If you think, "That's crazy, Steve, some other limiting factors are going to intercede before you get plus 100 or plus 200 IQ points."
That might be true, but our experience is in agricultural breeding, where a similar analysis applies. If you look at a plant-- by the way, this whole field that I'm talking about, and again, I'm always taking the piss from these stupid wokesters, they don't like me talking about this stuff, but if they go down to the Monsanto lab or the Iowa State University Agricultural Breeding Center, people are doing essentially the same mathematics with plants and animals in an agricultural setting. Oops, I guess it's not BS.</p><p>In the agricultural setting, they have a similar situation where, for the milk production of a cow or the number of eggs laid per month by a chicken or the rate of growth of corn or the size of the ear of the corn or the drought resistance of the corn, they can tell, by the same analysis I just gave you for IQ, that there are many, many standard deviations up for grabs.</p><p>They aggressively started breeding these plants, or using polygenic scores for selection, which, by the way, has now become completely standard in the breeding of dairy cattle and stuff like this, and I think maybe even in some cases in breeding chickens. There are many, many cases of multi-standard-deviation shifts that have actually been accomplished by animal breeders and plant breeders. The eggs that you ate for breakfast are laid by chickens that lay almost one egg per day. In the wild they might lay like one egg per month or maybe a couple of eggs per month, but these chickens are laying an egg almost every day. The chickens that are populating all of our farms, they're ravenously hungry. They just want to eat and lay eggs.
In the wild population, less than one in a million, one in a billion wild chickens from the old population were anything like the modal chicken on a farm today.</p><p>That tells you right away that stuff that's unimaginable when you look at the wild guy can be produced through controlled genetics. This is just true. If you ask any animal science guy, people who do animal breeding, plant breeding, they'll just tell you, yes, of course, that's how we have agricultural revolutions and that's why farms are so productive today and yada, yada, yada. As a purely scientific statement, what I'm saying about what's possible with human intelligence, I think it's very unlikely that I'm wrong about this.</p><p>Now, it could be like, "Oh, the humans that are plus 100 IQ points, whatever, they start to have to have bigger brains and their skulls have to be bigger." Brain size is correlated at about 0.4 with IQ. Other things could happen that you don't like. Like, oh, maybe all women would have to have cesarean sections in the future because the babies' brains are getting so big because you did this weird genetic editing to them. I'm not endorsing any of this, but the fact that there are many standard deviations up for grabs is, I think, for people who actually understand the science, not arguable.</p><p><strong>[00:55:51] Dan: </strong>Just so people understand, a 130 IQ, as you said, that's like two standard deviations, so you're in the top 2.5% of the population. A 145 score, you're in the top 0.14%. Getting into 160 to 200, that's like Einstein, speculatively some of the smartest people of all time. What this is showing is, using the same logic we applied to the chickens, you could theoretically push this to get somebody that's up in the 500s or higher.</p><p><strong>[00:56:27] Steve: </strong>Yes.
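The square-root argument from a few minutes earlier can be checked with a toy additive model. The setup below (binary loci, equal effects, 50/50 allele frequencies) is a deliberate simplification, not real genetics:

```python
import random
import statistics

def sd_headroom(n_loci=10_000, n_people=4_000, seed=0):
    """Toy additive model: each of n_loci variants contributes +1 or 0
    with probability 1/2. Returns how many population standard deviations
    separate the best possible genotype (all favourable alleles) from the
    population mean. For this model the answer is close to sqrt(n_loci).
    """
    rng = random.Random(seed)
    # Each person's score = number of favourable alleles out of n_loci.
    scores = [bin(rng.getrandbits(n_loci)).count("1") for _ in range(n_people)]
    mean = statistics.fmean(scores)
    sd = statistics.stdev(scores)
    return (n_loci - mean) / sd
```

For 10,000 loci this comes out near sqrt(10,000) = 100 standard deviations of headroom, matching the "at least 30, maybe 100" range quoted above; real trait genetics is far messier, but the scaling is the point.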
Now, I don't know what it means to have an IQ of 500 or this or that, but I know that person probably is a hell of a lot smarter than me.</p><p>[laughter]</p><p>Now, the other funny aspect of this conversation, I don't know if you can tell, but I'm sick of talking to biologists and biomedical people. Yes, it's true. I spend most of my time these days working on things more related to AGI and AI and large language models, stuff like that. There, if you say, "Oh, you know what, we're going to 10X the training data and 10X the compute resources, and I have a slightly better algorithm. Am I going to get an AI which is significantly better, smarter than GPT-4?" People are like, "Yes, wait, Steve, that's what we're doing. What are you talking about?" Oh, you were in biologist mode or something, so you couldn't understand things for a while. Why would that be shocking to you?</p><p>Right now, GPT is not good at certain things, but the guys who are working on this are very confident that we're going to pass some threshold and suddenly the new models will be magically really good at some of these things, like theory of mind or doing mathematics or whatever it is. That's not shocking to them. People who think very deeply about intelligence, about the operation of an information-processing neural network, which, by the way, my brain is and your brain is, people who think about that stuff don't think it's crazy to talk about something that is a quantum leap in capability beyond the previous one due to some improvements.</p><p>Well, could those improvements be encoded by genetic changes in the organism? Why not? Why the hell not? Why is it that my dog will never understand doorknobs, but my kid will understand the doorknob by the time he's two or three years old or something? Wow, it must be God or something, the Bible, or what could possibly explain that?
Oh, I guess it has something to do with DNA, except we shouldn't talk about that for reasons or something. Yes, it's kind of ludicrous.</p><p>There are biologists who'll read some popular article about genetically engineering super geniuses, and then they'll give you five fallacious reasons why it's impossible that we'll ever genetically engineer superhumans. They'll put their own stupidity on public display by giving you five fallacious reasons that have nothing to do with the problem. If you map their reasoning onto what people are doing training neural networks, it's just obvious that what they're saying is stupid, but anyway.</p><p><strong>[00:59:15] Dan: </strong>I mean, you could stretch this analogy, because it plays into the whole concern about AGI. It's like, well, who's going to get smarter faster? [laughs]</p><p><strong>[00:59:25] Steve: </strong>Oh, this is the main question. I think for the most deep-thinking minds in our society today, the main question is, at what point will machinic intelligences overtake human intelligences? That's one question. Second question: is there any hope for the wet intelligences to improve themselves, to affect that race? In the race between us and them, we're not getting better, we're actually getting worse if you calculate carefully, and they're getting better, but we could get better at an accelerated rate through these biological technologies. Average people don't want to talk about this, but serious people are thinking about stuff like this. Yes, that's the question of our age, actually.</p><p><strong>[01:00:22] Dan: </strong>It's also almost symbiotic too, because some of the hardest problems of creating smart intelligence require human intelligence to build the thing to hit escape velocity.
You get smarter humans, and they could be the ones that are now pushing the machines a little bit further.</p><p><strong>[01:00:39] Steve: </strong>A conversation that I regularly have with billionaires who are very focused on AI is whether improved humans, if we suddenly were able to make much smarter humans, would have a better chance of solving the alignment problem, so that the risk of the machinic intelligences doing bad things to us is reduced. Should we try to slow down AI development so that improvements in human intelligence can have a better shot at solving the alignment problem? Not solving it, but improving engineering around the alignment problem. These are serious conversations people have. Again, it sounds like science fiction, but serious people are actually having these conversations.</p><p><strong>[01:01:27] Dan: </strong>Maybe this is a good segue: we laid the groundwork for what is possible, but we glossed over some of the societal challenges to actually getting here. One thing I wanted to ask you about is, when I think of the really groundbreaking stuff, it seems like you have nuclear on one end, and we've proven to be pretty good at regulating it. It's very physical. It's not easy to get your hands on plutonium if you don't know what you're doing or have special authorization. Then you have AI on the other, where the jury's still out.</p><p>Once you do a training run, you can gate the thing behind an API, but unless you're confiscating H100s or something, it's very difficult to understand how you're going to stop people from doing these big runs. This seems to fit somewhere in the middle. I did 23andMe quite a while ago, when it was first coming out. You could just download your raw data. There are some sketchy sites that will tell you things that 23andMe doesn't want to, for reputational reasons.
It might not even be real science, but they can try to tell you more about your genome than 23andMe will give you.</p><p>The question here is basically, once someone does one of these runs, and let's say they get their hands on the intelligence data and they do it, what is to stop me, with my 23andMe data or an embryo that could be my child, from just sending it to them and saying, "Hey, can you go pick the best one?" That's just something that's going to be really challenging, actually, for anyone to stop in the long term.</p><p><strong>[01:03:10] Steve: </strong>I think the main difference when you compare these genetic technologies to either in silico AI or to nuclear weapons is that the barrier to entry to do the genomic stuff is actually low, other than the difficult part of assembling large data sets; that's hard. Once you have the large data sets, the computational analysis and the IVF stuff, it's kind of low-CapEx stuff in comparison to the other two. Now, the part that's slow about genetics is it proceeds only at a human generational timescale, so you have a lot of time. If you made some mistake and you screwed something up for this generation, you have a lot of time to study them and fix it.</p><p>I think the nightmare scenarios about runaway human genetic engineering or whatever are actually way overblown. There might be a particular family that suffers, or even part of a generation of people that suffers, but the human species is not going to destroy itself by genetically engineering itself. I don't think that's a huge risk. More risky is the ability to genetically engineer viruses and things like that.
That's a whole different thing.</p><p>I don't think there's going to be any stopping the genetic technologies that we're talking about, even if the United States is taken over by wokesters. By the way, just as a historical footnote for your listeners, if you want to have some fun, go look up a guy called Lysenko in the Soviet Union. Under Stalin, the study of genetics was actually forbidden, and lots of scientists were thrown into the gulag for studying genetics because it violated socialistic tenets of the equivalence of individual men and stuff like this. Anyway, long story, but the point is, in America, we're flirting with ending up in that place, but it's not going to stop in China or Taiwan or Japan.</p><p>Other people are going to eventually get millions of IQ scores or college entrance exam scores linked to genomes. Yes, eventually there will be predictors. These predictors can easily run on your phone or on very limited hardware. Eventually, this cat cannot be kept in the bag. Even if it is kept in the bag by some risk-averse woke Americans or Europeans, it's not going to be kept in the bag in Asia. No way. There's no holding that back, but it's going to play out over multi-generational timescales. And my AI friends-- it's like I have different groups of friends. I have the physics friends, I have the AI friends, I have the computational genomic friends.</p><p>The AI friends look over at the computational genomic friends and are like, "Don't those guys realize everything they're doing is going to be replicated by some AI in 20 years, in five seconds? Who cares how smart humans are? Because by that time, machines are going to be like a hundred times smarter."
In a way, from that perspective, the issue is like, well, the machines are going to surpass us, and then at some point, the future of the planet will be in the hands of the machinic intelligences, not the ape brains.</p><p><strong>[01:06:37] Dan: </strong>I wonder too, if you just surveyed the United States, how many people would actually care about optimizing for intelligence? What if instead we just get everyone to look like Brad Pitt and Angelina Jolie or celebrities?</p><p><strong>[01:06:50] Steve: </strong>Oh, totally right. Many more people care about that than care about brain power. The smart kid was never the most popular kid in school. It's not even very easy for people who are more in the middle of the distribution to understand how the people in the tails drive progress. Because by definition, if you're in the middle of the distribution, you don't know how a large language model works. You don't know how a thermonuclear bomb works. You don't know how your microwave oven works.</p><p>You don't know how your internal combustion engine works, so you don't even know that, "Wow, if I did a really careful accounting, I'd notice it's predominantly these weird smart kids that we didn't like in high school who are responsible for all of those things that I just listed," each of which has totally radically changed society. These people don't know it or care about it. Much more immediate to them is, "Is Johnny going to be popular in school?" I just watched the <em>Quarterback</em> documentary on Netflix. Did you see this?</p><p><strong>[01:07:56] Dan: </strong>I didn't, no. I'll have to check it out.</p><p><strong>[01:07:58] Steve: </strong>Anyway, the point is, to the people watching <em>Quarterback</em>, the NFL quarterback is the acme of human development. It's like Kirk Cousins is Superman.
You're an MSU guy, right?</p><p><strong>[01:08:13] Dan: </strong>Yes, I was there during Kirk Cousins.</p><p><strong>[01:08:16] Steve: </strong>Well, Kirk Cousins, I don't know how he was in college, but in the documentary, because he plays for the Vikings now, he's not the top level of the NFL, but he's a really good quarterback. If you look at his life, how disciplined the guy is, the guy's totally fit and ripped, and he's doing everything he can to be a better quarterback. Every weekend he engages in a heroic competition, which uses his brain a lot, actually, as well as his athletic gifts, and the average person can understand that. If you watch this documentary, you'll come out of it thinking guys like Patrick Mahomes and Kirk Cousins, Mariota even, are the ideal.</p><p>Their lives are heroic and exciting, and everybody can understand why they're awesome. Whereas, "Wow, I heard these two geeks sit around for an hour just talking about some math and DNA or something. What the hell is that all about? I wouldn't want my kid doing that." That's my view of it.</p><p><strong>[01:09:18] Dan: </strong>Maybe it's like the great filter: you get just smart enough to get gene editing, but then the preferences of everyone shoot us towards a completely hedonistic lifestyle, making everyone star athletes and just living for pleasure and fun.</p><p><strong>[01:09:31] Steve: </strong>Oh, yes, it could totally go that way, but the thing is that there are enough people who are smart and want smart kids that they are going to shoot off in some direction. Let's imagine a world where, I don't know how big of a <em>Dune</em> fan you are, but let's suppose we have the-- Do you know what the Butlerian Jihad is?</p><p><strong>[01:09:52] Dan: </strong>Yes, they kill all the robots or they get rid of technology.</p><p><strong>[01:09:57] Steve: </strong>Yes.
In the <em>Dune</em> universe, the humans have a close brush with AI taking over, but they defeat the AIs. Then they pass this law on penalty of death: thou shalt not create a machine in the image of the human mind, i.e., no LLMs. Anyway, they manage to cut off AI research and it's just illegal. Their technology base is very strange. They have some advanced stuff and some very primitive stuff, but that's the <em>Dune</em> world. Imagine that we have the <em>Dune</em> world, so we don't have to talk about machinic intelligence, but we continue developing genetic engineering somehow. Then what happens?</p><p>Yes, you might have a lot of people for whom all they care about is that their kid looks like Kirk Cousins or something, but that sliver of people who are already in the top 1% of intelligence, that's a big enough group of people that they can conduct their own effort. What will happen is, behind the scenes, instead of AIs developing new tech that reshapes the world, it's these genetically engineered super brains, developing the quantum computers or whatever it is that you're allowed to do in this hypothetical universe, that are actually secretly shaping society, but the bulk of people have no idea.</p><p>Just as today, the bulk of people really have no idea what the forces shaping society are. What are these solar panels, and does it matter that the cheapest way to generate electricity now is actually from solar panels made in China? Nobody in this bulk of the bell curve understands anything I just said or what the consequences of it are. It'll just be more of that on steroids in this hypothetical scenario.</p><p><strong>[01:11:50] Dan: </strong>Is there a lot of venture money going into this? Because it's weird to me that you don't really hear anything about these sci-fi potentials. Like I said earlier, you heard about the Human Genome Project in the 2000s, and then it's been dead.
Honestly, even as someone who just reads the internet all the time, you have to go to your blog or Gwern's blog to actually read scientific papers to get this stuff. It's not really posted all over. There are a couple of PopSci articles, but it's not that popular.</p><p>Is there a lot of venture money going into it? So much could go right here, and it seems like the probability is quite high.</p><p><strong>[01:12:28] Steve: </strong>It takes a certain kind of reasoning to follow the path that you just outlined, and that reasoning is most prevalent in the EA world, the effective altruism world, the rationalist world, where they're trying to follow the science, they have open minds, and they are interested in human flourishing and they like IQ points. In that community, there are people really interested in this, and there are venture capitalists in that community who want to fund this stuff. Believe it or not, even among the pool of venture guys, there are normies, like, "Oh, let's build the next e-commerce thing." There are normies, and then there are guys who are like, "I like deep-tech, edgy investments."</p><p>Those are the guys who would typically invest in genomic stuff like this. Or they just see the appeal of it: they know IVF is going to blow up because human fertility is going to become more and more of a challenge as women pursue high-powered careers and people get older and older before they have kids. They just see that as a business opportunity, but they're not fundamentally interested in this, what some people call a transhumanist future or something like this. It's a whole mix. Not all venture guys are really open to these ideas.</p><p><strong>[01:13:51] Dan: </strong>It surprises me a little bit because it's not really in your face, but it seems like it could be really big.</p><p><strong>[01:13:57] Steve: </strong>Yes. Oh, I think it's one of these things that will be really big.
I would predict before too long this 10% of babies produced through IVF, which is true only in a few countries now, will become more the norm in developed countries around the world. At that point, you're talking about 10% of all babies born, and then if, wow, there's one company that actually can do embryo genotyping in that space, wow, that's a pretty powerful company. It's got a lot of resources. Wow, they're actually assembling the data set for intelligence themselves on the down-low, and wow, they're the only ones now that can actually predict IQ, and by the way, they can demonstrate it.</p><p>There's an asymmetry related to this ML and polygenic traits, which is that I don't have to reveal my predictor to you in order to prove to you that my predictor works. In other words, let's suppose I make the public claim, "Hey, you know what? Genomic Prediction, despite the attacks of all you woke morons, we've actually completed our cognitive ability predictor and it correlates 0.6 with actual IQ. It's good enough, and we're going to start making superhumans, so walk off. Oh, by the way, the government of Singapore has invited us to build a 10,000-square-meter research institute and genotyping lab there. If you want to bomb Singapore, go for it, but otherwise, leave us alone.</p><p>Oh, and by the way, people from all East and South Asian countries now are flying to Singapore to have their kids, to do their IVFs and that kind of thing." Suppose that's the case. Now, you're some normie computational biologist at Harvard funded by NIH or something, and you're like, "Steve Hsu, I don't fucking believe you. You guys are lying. I don't think your predictor works."</p><p>I would say the following, I'd say, "Okay, pull a few hundred people from the people that you've genotyped whose SAT scores you just happen to know.
You don't tell me their SAT scores, but we're going to give them to a third party, Dan Schulz. He's going to hold the true SAT scores of these few hundred people or a thousand people, but you're going to give me their genomes." Now, I just run my algorithm. "Oh, it took me five seconds on my iPhone." I just run it. "Here's the flat file with the list of a thousand predicted cognitive scores." Then Dan will run the comparison.</p><p>He'll calculate the correlation between my predicted IQ score and the one that was actually in the biographical data of these thousand individuals, and Dan says, "Yes, it correlates 0.62." I would say, "Hey, Harvard guy, go fuck off. By the way, I'm not telling you what the structure of my predictor is, but it works. Give me another thousand, I'll show you again." We can get to a point where I can prove to the world I can do X without revealing how I do X. At that point, just to repeat myself, I'll just be like, "Go fuck off, guys. See you later. You don't like it, fine.</p><p>Hey, you in particular, when you want to do your IVF cycles, your money's no good here. We're not servicing you. Get the fuck out of here." [laughs] Will we ever see that day? I don't know. Maybe the AIs will take over before any of that shit happens.</p><p><strong>[01:17:38] Dan: </strong>[chuckles] This raises a question, though, because if this gets that good, is there a way to run a calculation? For me, I just got married. Is there a way for me to run a cal--</p><p><strong>[01:17:47] Steve: </strong>Congratulations.</p><p><strong>[01:17:48] Dan: </strong>Thank you. My wife, she actually asked if I would ask you about this. She has one of the variants of CHEK2, which is one of the rarer breast cancer mutations and puts you at elevated risk for breast cancer. The question was, is there a way to run a calculation on her genome and my genome to see what the risk would be and if we could benefit from IVF?
Presumably, IVF is a lot to undergo, but at some threshold, it could make sense if you had a really high chance of passing it on, so I pose the question to you.</p><p><strong>[01:18:21] Steve: </strong>Let me make two observations here. Let's suppose in your family, there is a rare genetic mutation. Either you or your wife has this rare genetic mutation and let's suppose that even one copy of it is dangerous. You should then go through IVF because we can do the following. We take saliva from mom, we take saliva from dad. For the region where that CHEK2 variant is, we then know her surrounding haplotype, the SNPs around it, what state she has there, what that chunk of DNA looks like for her and what that chunk of DNA looks like for you.</p><p>We can then look into the embryos and say, "These are the embryos that got the CHEK2 variant from your wife and these are the ones that didn't, and if you're worried about the effects of that, you might consider using these, not these." That's what we call a Mendelian or single-gene variant, and we can pretty much 100% guarantee your kid is not going to have it if you go through embryo genotyping. Does that help?</p><p><strong>[01:19:42] Dan: </strong>It does. What would the chances be if we didn't do it, if you just did it naturally?</p><p><strong>[01:19:49] Steve: </strong>I don't know about CHEK2, whether it's recessive. Does your wife have two copies or one copy?</p><p><strong>[01:19:54] Dan: </strong>I'd have to ask her. 
I don't know.</p><p><strong>[01:19:55] Steve: </strong>There's basically simple math: for any particular chunk of DNA, the embryo's going to get one chunk from mom and one chunk from dad, but if your wife only has one copy, she could get the chunk of DNA that doesn't have that variant from mom-</p><p><strong>[01:20:17] Dan: </strong>Got it.</p><p><strong>[01:20:17] Steve: </strong>It's 50/50 that she gets a copy of it from mom, but we can just tell you which ones are which, and then you don't implant the ones that have the dangerous variant. These Mendelian issues can be solved. That's the easiest thing for us to do. I just want to make one more point, because you mentioned breast cancer; think of this as a public service announcement. I always make this point because I think people are unaware of it. Many people are aware of the BRCA mutations. Those are mutations on a particular gene or region that predispose women to breast cancer.</p><p>It's pretty rare; depending on your ethnic group, only roughly one in a thousand or a few per thousand women have this genetic variant. If you do have it, if you have the worst version of it, you could have like a 60%, 70% chance of breast cancer by the time you're 40 or whatever. There's a broad awareness about these BRCA variants, even though in aggregate, the number of women or the number of families that they affect is really quite small. If you're doing polygenic analysis of breast cancer, what you find is that there is a contribution to breast cancer risk from these rare variants, which is only affecting a very small number of people.</p><p>Then for the rest of the population, most of the variation in whether you're high risk for breast cancer or low risk for breast cancer comes from about a thousand individual loci, which are each only adding or subtracting from the risk a little bit. 
It's a typical polygenic story, but most people who are high risk for breast cancer are high risk for the polygenic reason, because so few people actually carry the rare mutations. Even though a rare mutation is very dramatic to talk about, and that one little change has a huge impact, that's, in a sense, a negligible part of the population.</p><p>What's really going on is that, just like there are some tall people and there are some short people, there are some high breast cancer risk people and some low breast cancer risk people. If you define the set of people who have as much risk as BRCA carriers for breast cancer, there are 10 times as many people who just happen to have the polygenic combinations that put them at that level of risk than there are actual BRCA carriers. Now, we have the technology to compute this: from your 23andMe genotype, I can compute your polygenic breast cancer score.</p><p>There are 10 times as many women walking around not knowing that they're high risk for breast cancer as the entire aggregate population of CHEK2, BRCA, blah, blah, blah carriers, but they don't know and they're not told that they're high risk for breast cancer because, A, people are just dumb. The medical profession has not updated on the new information coming from this science of polygenic risk. The fact that we can compute this, and the fact that the number of affected women, defining some threshold, is 10 times larger than the number of women walking around with BRCA or CHEK2, just hasn't penetrated our medical system yet.</p><p>10 years from now, I'm sure people will be getting polygenic scores for colon cancer and breast cancer and all this other stuff, but currently, it hasn't penetrated yet. It hasn't, at the moment, penetrated at all into the general consciousness of doctors. Now, in our embryo practice, we often will see a family, they come in and they're like, "You know what? We have a lot of breast cancer in our family, but wow, we're BRCA-negative. 
We're not carriers of any rare breast cancer thing, but my aunt had breast cancer and my grandma had breast cancer and I'm really worried. Can you make sure that my baby girl has lower risk for breast cancer?" Now, we get the genomes and we're looking.</p><p>We get mom's genome, dad's genome, and the embryos. We look and we say, "Yes, that's right, mom's part of the family, oh, yes, and maybe dad's part of the family, has high polygenic risk for breast cancer." They are not carriers of the rare variant. They just happen to be unlucky. Just like, oh, a lot of tall people in this family. Oh, in this family, a lot of people who are going to get breast cancer.</p><p>Then we look in the embryos, and the way that mom and dad's genomes recombined means that even though the family in aggregate is way above average in their polygenic risk for breast cancer, we typically can find embryos that are at normal risk, just because of the luck of the draw of the way the DNA recombines. They didn't get the risk variants in this chunk, they got the low-risk chunk from dad and they got the low-risk chunk here. That embryo is fine, but without our technology, there's no way, they're never going to break this chain of inherited family history of breast cancer. Whereas we can break that chain. We just break it right there.</p><p>That's the most interesting stuff for me personally, to look in and see, "Oh, what's the distribution of breast cancer risks of these embryos?" and, "Oh, what were mom and dad like?" We can just see it. It's amazing new technology that didn't exist five years ago, but we can do stuff like that.</p><p><strong>[01:26:07] Dan: </strong>Presumably you all have some of the best models for this stuff. Have you considered a consumer website similar to 23andMe, where instead of screening for kids, you just let people off the street run their polygenic scores?</p><p><strong>[01:26:20] Steve: </strong>We want to do that. 
Every now and then, some other entrepreneurs come to us and say, "Hey, we have the insight that you just had," which is that, yes, somebody should be pushing this as an adult polygenic score product, in the same way that 23andMe or Ancestry do their thing. We've really just been heads down on the embryo stuff. Our company has not tried to do this, but we have the back end for calculating everything. The embryo report that we generate could easily be an adult report.</p><p>We could start this company. In fact, at this moment, it just happens we just had another iteration of another set of entrepreneurs coming to us saying, "Hey, we want to pursue this adult polygenic score market," and so we're kind of talking to them, but so far, nobody has succeeded in doing this, just because it's complicated, and people are generally concerned about their health, but how much do they really care? Whereas you're very focused when you're going through IVF. It's like, you decided, "I really want to have kids. We're having a fertility problem. I'm going to invest tens of thousands of dollars," and, "Oh, wait, I can genotype my embryos. Wait, let me figure out what that means."</p><p>It really focuses the mind, and it's qualitatively different from, like, you go see your GP at the hospital, and he's like, "Yes, Dan, we checked your blood lipids, you're a little high. Maybe, do you eat a lot of red meat?"</p><p><strong>[01:27:56] Dan: </strong>Lots, yes.</p><p><strong>[01:27:57] Steve: </strong>Yes, and then you have that conversation with your doc. Now, imagine explaining to that dude polygenic scores, pleiotropy, machine learning, blah, blah, blah, absolute risk versus relative risk, odds ratios. Just imagine explaining all that to a guy who has to see one patient every six minutes or whatever the hell the HMO is doing to him. It is at least somewhat challenging, I think, to contemplate getting there. 
It'll happen eventually, but for any particular entrepreneur who wants to do it, I would say it is a difficult lift.</p><p><strong>[01:28:36] Dan: </strong>I've actually always been surprised by the general population, how many people just don't seem to think 23andMe is that interesting. I find it super insightful to learn what you're high-risk for and stuff. A lot of people just don't care. It doesn't seem to be a big deal.</p><p><strong>[01:28:49] Steve: </strong>Yes, exactly, people don't care. Also, "Wow, you're an engineering graduate from Michigan State with a background in tech. Exactly how representative are you of the general population?"</p><p><strong>[01:29:03] Dan: </strong>You have some really interesting personal stories, and you know, at least by a couple of degrees, some famous people whose stories you post on your blog. The first is Jeff Bezos. You worked with a lot of physicists who used to be students with him at Princeton, is that right?</p><p><strong>[01:29:20] Steve: </strong>Jeff Bezos went to Princeton to study physics. In many, many interviews he's given, he discusses this. When he left high school, his goal was to become a theoretical physicist. He went to Princeton, which has one of the top undergraduate programs in physics. He did well as a physics major until he got to quantum mechanics. At that point, he himself says he had a lot of trouble with the more abstract formulation of quantum mechanics, and he realized that out of his class of physics majors taking quantum mechanics with him, which would probably have been 20 to 30 people or something, there were a couple that just got the material naturally.</p><p>He felt they were just much better than him at it, and he just felt he didn't have a good future as a physicist, and so he switched to computer science. That's all on the record. My best friend from high school went to Princeton. 
Started out as a physics major, switched to math. I'll dox him a little bit. He graduated number one in his class, Bezos' class, from Princeton, I think the class of '86. My buddy was the valedictorian of the class of '86 and was pretty good friends with Bezos; they were both in the same eating club. I think they were both in Cloister.</p><p>I heard a lot of stories from him, and then there's a whole other group of physics guys who, because it was fashionable or whatever, for whatever reason, a lot of Princeton undergrads would go to Berkeley for their PhD in physics, and I also went to graduate school at Berkeley. I'm class of '86, so my high school friend was a year ahead of me, but I graduated in three years from Caltech, so we ended up graduating college the same year. In my cohort at Berkeley in graduate school, including the roommates that I lived with, were guys who had taken quantum mechanics with Jeff Bezos and knew Jeff Bezos. That's how I heard all these stories about Jeff Bezos.</p><p>Anyway, so yes, I know the stories firsthand. Sometimes people will say things like, Bezos wasn't smart enough to be a theoretical physicist, but he was smart enough to be one of the greatest entrepreneurs and business creators of all time. To the extent that general intelligence is a thing, I don't think that statement is completely inaccurate, because probably all these other physics dudes from Princeton who were contemporaries of his were in some broad intellectual sense as sharp as Bezos, but of course, in life, they were nowhere near as successful as Bezos, so whatever.</p><p>Yes, there are levels to this thing. My wife hates it when I say this, but there are levels to this thing. If I said it the following way, "This guy was a great high school running back, but when he got to Michigan State, he could not make the starting lineup and eventually dropped out and became a rugby player or whatever," people would be like, "I get it. 
I understand that story, Steve. I'm not personally threatened by that story. Let's tell the story again," but if you tell the story about Bezos, there are a lot of people that don't like this story. As far as I can tell, the story as I have related it to you is true.</p><p><strong>[01:32:47] Dan: </strong>Basically, he was in college and he realized that he just didn't have the chops of some of the other physicists, but then he goes to Amazon, and he's the smartest guy in the room in every conversation that he has.</p><p><strong>[01:32:59] Steve: </strong>Right, so this part of it I can't personally verify. This is from other people's accounts. Other people will say, and I guess the part of it I can verify is the experiences I've had as a generalist in tech startups, which aren't really physics-focused startups. The statement is, because there are levels to this thing, you can have a guy who is not an expert at a particular thing, but the engineers come in and start talking to him and say, "Hey, this is how we decided to solve this problem. We're going to do this, this is our process."</p><p>The other guy, who could be Bezos, or it could be me as CEO of one of my startups, can be like, "Okay, I think you guys have done a good job, but did you think about this, or did you think about this? The one part of your plan that's not going to work is if you hit an issue on this; that's going to not work." All of these stories are that in many situations at Amazon, Bezos was the smartest guy in the room and was often very useful to have in technical discussions. People have said the same thing about Bill Gates. He started at Harvard wanting to study math. I'll just say I've experienced it myself.</p><p>If you ask the engineers at my companies, "If we get Steve in a technical discussion, he might not be as up to speed on the details as us, but he's often very useful to have in a conversation and sometimes says stuff to us that we didn't think of." 
I think that's plausibly true.</p><p><strong>[01:34:30] Dan: </strong>The one other famous person that you had some interaction with: Richard Feynman. I had the Feynman lectures when I was a freshman at Michigan State.</p><p><strong>[01:34:39] Steve: </strong>Oh, wow.</p><p><strong>[01:34:39] Dan: </strong>I was using those instead of whatever physics books they gave you. My dad passed them down to me. I guess my question here is, he's from an era with some of these legends, Bohr, Fermi, Einstein. You can go watch the <em>Oppenheimer</em> movie and it's filled with little Easter eggs of all these guys. Honestly, even people who don't know science know a lot of those names, and my question is, where is that today? Is it gone? Maybe it's still there.</p><p>Maybe I'm just ignorant of it, but is the famous scientist gone because those ideas were easier to find and we got the low-hanging fruit, or is there something different going on where we're just producing less of that massive talent and it's not all working together in the same room? What do you think is happening?</p><p><strong>[01:35:30] Steve: </strong>This is a great question and something even professional physicists talk to each other about. Let me make a couple of observations. Pre-Feynman, the earlier generation would have been Bohr, Oppenheimer, who was a little bit younger than Bohr, Heisenberg, Dirac, people like that, and then after that would be Feynman, Schwinger, and some other people. At that time, if you just calculate how many people had the benefit of a university education like what you got at Michigan State, what is the number? It's literally a tenth the number that have that opportunity in the modern Western world. That's not even counting developing countries like China, India, stuff like that.</p><p>We're speaking about a much smaller pool of people from which you could draw top scientific talent. 
Another way of convincing yourself of that is to go look at the raw scores on the IQ tests of people who entered the US Army. They started using IQ tests for the US Army around World War I, so they have records from 1918, 1925. There's a steady increase in the raw scores that's called the Flynn effect. It's not because people were genetically getting smarter. Just match it up: what was the average number of years of schooling that a World War I recruit in the US Army had? Six.</p><p>Today, that would be child abuse. It's literally, the kid starts grade one, leaves school after grade six to start working on the farm or in the factory or something. There was a big change in just the chances that you got a decent education between the 1920s, when these giants were around or getting educated, and today. We're talking about a much bigger pool of people from which to draw talent. Now, similarly, in sports, if you're like, "How fast did Jesse Owens run in 1936 to win the Berlin Olympic Games?" I think he ran a 10.3 in the hundred meters or something, and that was no spikes. He was probably running on dirt or something, who knows.</p><p>Anyway, in those days, you could have, here's the distribution of the other runners, and then here's Jesse Owens. You could have a wild outlier. Jesse Owens, by modern standards, yes, maybe he'd be a pretty good sprinter, maybe he'd be not quite world-class, maybe he wouldn't be world-class, but the point is he was an outlier in that pool of athletic talent. He looked like a giant, he looked like a superhuman compared to all these other guys, because the other guys sucked. Similarly, a guy who by modern standards would be like, "Oh, this guy is pretty competent. He's world-class, but not considered a super-genius theoretical physicist."</p><p>Back in those days, he might have been an outlier. He might have been a guy who looked very different from the bulk of the rest of the distribution. 
When you're sampling from a smaller pool, it's easier to find outliers that look very different from everybody else, but when you have enough people, the distribution starts to fill in. Then, yes, person X is really smart, but person Y is just as smart as person X. He's right next to the guy in the distribution. There's another guy, person Z, and actually, in this bin, there are 10 guys. Then after a while, you don't talk about person X as a genius outlier.</p><p>Person X is very smart, but I can identify a hundred other people who are similarly smart, and they're all working on quantum gravity, [laughs] so they don't stand out. There are two distinct issues here: who has the ability to make a contribution, pushing the science forward a bit, and who stands out so much that people start to mythologize the guy and talk about how brilliant he was and how off-scale he was and how there was nobody else like him. Those are two different things. In the modern era, there are so many smart guys, and our systems are actually not bad at picking them out, that there isn't that much overlooked talent, or a lot less overlooked talent.</p><p>Anyway, the distribution filled in, so just psychologically, it feels different. Now, the second thing you might say is, but how come we're not making these huge leaps of progress? There happened to be huge leaps in special relativity and quantum mechanics in the early part of the 20th century. That has much more to do with factors outside our control. It just happens that the difference in energy scales between the weak scale, which is very well understood in the Standard Model of particle physics, and the quantum gravity scale is enormous. 
There are about 16 orders of magnitude between that scale and the quantum gravity scale, so we don't have any experimental equipment that can probe the Planck scale, the quantum gravity scale.</p><p>The theorists can talk all they want about what's going on there, and maybe string theory is the most beautiful correct theory ever developed, but we're not going to know it. Certainly not in my lifetime or your lifetime. The problem is there just happens to be a big gap in fundamental physics between what we've explored and what we need to explore to test the current set of theories that people are working on. In the old days, if you saw the movie <em>Oppenheimer</em>, there's a character, Lawrence. Lawrence was one of the great experimentalists of his day and he built the first cyclotrons at Berkeley.</p><p>You may ask, "How much money did that cost? How long did that take? How hard was it to build those cyclotrons?" Because as soon as they built the cyclotrons, they could then test a bunch of interesting questions that the theorists were worried about. In those days, a theory would be tested within six months, and then the theory would be reformulated, and it was a very healthy, good time for physics, a golden era for physics. Whereas for fundamental physics, particle physics, today, we have this problem that nature is not kind. For some reason, nature put the quantum gravity scale way beyond the scale that even the biggest accelerators on earth can explore. It could've been different.</p><p>There's some other parallel universe where the Planck scale is very close to the scale of what a one-kilometer-radius collider can do. In that universe, Steve Hsu won the Nobel Prize for some experimental prediction that was tested when he was still 27 years old, but what can you do? We don't control how nature is structured. It is what it is. Some generations get lucky and the experiment and the theory are close together. 
I could write the paper about, "Oh, height is going to be solved pretty soon, as soon as we get to here." Then a few years later, "Yes, we solved height." I could do that only because I was lucky to be in it.</p><p>Why? I purposely went there because I knew this was going to happen. I was lucky to be in a place where the underlying technology to explore genomics was advancing very rapidly and the theory could actually play a role in understanding what the experimental results were, but that's different from, are there still smart guys? Are there still geniuses? Terry Tao is probably, in terms of his mathematical power, way beyond some of these guys. Certainly way beyond Oppenheimer, Bohr, and Heisenberg.</p><p>There's no question. Those guys are not even in the same league, not even at the same level as a Terry Tao or something like that. There's just no question about that. Anyway, I don't know if I helped explain it.</p><p><strong>[01:43:43] Dan: </strong>No, sometimes I feel like the answer to that question is fluffy, and people just blame school not being as good anymore. We don't do it like we used to. But that actually makes perfect sense. It's a combo of the field that you're in. They happened to be in a field where you could test your theories on short notice, and then also, you're going to stand out if there's only a couple of thousand of you versus 10, 20, 30 thousand or more.</p><p><strong>[01:44:08] Steve: </strong>Yes, absolutely. The other factor that people quote is, oh, they used to have this aristocratic education, and it is true. Both Feynman and [crosstalk].</p><p><strong>[01:44:18] Dan: </strong>Yes.</p><p><strong>[01:44:19] Steve: </strong>They had tutors. They came from wealthy families. 
They hired people that were the equivalent of PhD students to come and tutor their kids when they were still like 15 years old, and so of course, you would learn a hell of a lot more if you didn't have to go to <strong>[unintelligible 01:44:34]</strong> high school, but instead, dad brought home some grad students who started tutoring you in physics and computer science. I think that might work better, but we don't do that anymore.</p><p><strong>[01:44:49] Dan: </strong>Yes, there's that Bloom's 2 sigma problem. You get two standard deviations or something of improvement in education if you're tutored one-on-one or whatever. I told <strong>[unintelligible 01:44:59]</strong> that shouldn't really matter because at some point, you're still going to go out on your own and just do the physics.</p><p><strong>[01:45:05] Steve: </strong>Exactly. Of course, as moderns who grew up with this stuff, we're a little bit biased, because when we look back, we're like, "Wow, what passed for genius work in 1929 is pretty simple stuff." This is stuff undergrads can do now. Of course, I don't mean undergrads know it because it's already in the textbooks, but just in terms of the difficulty of the calculations that those guys had to do, it's not really that impressive by modern standards.</p><p><strong>[01:45:37] Dan: </strong>Should more physicists be jumping fields? You guys do it a lot actually right now. I feel like physics majors are all over the place in other fields, but should more be hopping? Is it too tough going, or what's your take on that?</p><p><strong>[01:45:52] Steve: </strong>I think that it's all a question of personal taste. I'm an old guy now, so if I look back and I say, "Wow, what did I do with my life? How did that happen so fast? What did I actually do?" I take great satisfaction in those years of closing the gap between being a young kid learning physics and getting to the frontier, understanding everything between the elementary stuff and the frontier. 
Pushing the frontier at least a little bit forward in my own way, but also developing a full mastery of everything in between, and that does take a long time. I don't think anybody can really just do that in a few years.</p><p>Even when you go through the PhD program, you still don't really have a full-- Maybe in one area you've closed that gap, but to really do that across a broad spectrum of physics, the intellectual achievement takes decades. Now, you ask me, "Steve, if you had just abandoned that a lot earlier, you could have added another zero or two to your net worth." What's the trade-off? Different people are going to react differently to that. I don't know what to say. I feel okay. I feel like my satisfaction with having mastered these concepts in mathematics and physics and biology and computation is very valuable to me internally, even if it didn't return more than a professor's salary during that time interval.</p><p>It's all a question of personal judgment. If you know ahead of time that fundamental physics is probably not going to make a lot of progress in the next few decades, but AI is going to do this, I would say, "Hey, unless you have a very, very strong affection for fundamental physics, it will probably be more exciting for you to work on AI." I can't make that decision for somebody else. Somebody else has to, based on their own preferences, make that decision.</p><p><strong>[01:47:56] Dan: </strong>Some great words of wisdom to end on, Steve. Listen, you've been super generous with your time. I've had just a total blast talking with you. Thank you so much for coming on.</p><p><strong>[01:48:06] Steve: </strong>Yes, this has been a great conversation. 
Good luck with your podcast.</p><p><strong>[01:48:11] Dan: </strong>Thank you.</p>]]></content:encoded></item><item><title><![CDATA[Bryan Caplan]]></title><description><![CDATA[Listen now (61 min) | Social desirability bias, the respect motive, venerating great names]]></description><link>https://www.danschulz.co/p/2-bryan-caplan</link><guid isPermaLink="false">https://www.danschulz.co/p/2-bryan-caplan</guid><dc:creator><![CDATA[Dan]]></dc:creator><pubDate>Tue, 25 Jul 2023 05:37:09 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/135423800/7e57119a309c69d178dce55029cef613.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<div id="youtube2-uCekDXNVbbs" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;uCekDXNVbbs&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/uCekDXNVbbs?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><div class="apple-podcast-container" data-component-name="ApplePodcastToDom"><iframe class="apple-podcast " data-attrs="{&quot;url&quot;:&quot;https://embed.podcasts.apple.com/us/podcast/the-world-of-yesterday/id1693303954?i=1000622231639&quot;,&quot;isEpisode&quot;:true,&quot;imageUrl&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/podcast-episode_1000622231639.jpg&quot;,&quot;title&quot;:&quot;2 - Bryan Caplan&quot;,&quot;podcastTitle&quot;:&quot;The World of Yesterday&quot;,&quot;podcastByline&quot;:&quot;&quot;,&quot;duration&quot;:3658000,&quot;numEpisodes&quot;:&quot;&quot;,&quot;targetUrl&quot;:&quot;https://podcasts.apple.com/us/podcast/2-bryan-caplan/id1693303954?i=1000622231639&amp;uo=4&quot;,&quot;releaseDate&quot;:&quot;2023-07-25T05:37:09Z&quot;}" 
src="https://embed.podcasts.apple.com/us/podcast/the-world-of-yesterday/id1693303954?i=1000622231639" frameborder="0" allow="autoplay *; encrypted-media *;" allowfullscreen="true"></iframe></div><iframe class="spotify-wrap podcast" data-attrs="{&quot;image&quot;:&quot;https://i.scdn.co/image/ab6765630000ba8a2e4ccb889a8cfeb65accfbc0&quot;,&quot;title&quot;:&quot;2 - Bryan Caplan&quot;,&quot;subtitle&quot;:&quot;Dan Schulz&quot;,&quot;description&quot;:&quot;Episode&quot;,&quot;url&quot;:&quot;https://open.spotify.com/episode/55by7ucmH8IBwQzVJCBA3L&quot;,&quot;belowTheFold&quot;:false,&quot;noScroll&quot;:false}" src="https://open.spotify.com/embed/episode/55by7ucmH8IBwQzVJCBA3L" frameborder="0" gesture="media" allowfullscreen="true" allow="encrypted-media" data-component-name="Spotify2ToDOM"></iframe><h3>Timestamps</h3><ul><li><p>05:15 - Social desirability bias</p></li><li><p>15:15 - Left v right</p></li><li><p>22:07 - The respect motive</p></li><li><p>28:40 - Material v rhetorical dominance</p></li><li><p>33:00 - Extremism</p></li><li><p>39:28 - Venerating great names</p></li><li><p>44:04 - Bryan's bets</p></li><li><p>48:07 - Bryan's alternative to democracy</p></li><li><p>58:53 - Where to find Bryan's work</p></li></ul><h3>Links</h3><ul><li><p><a href="https://www.amazon.com/Voters-Mad-Scientists-Political-Irrationality/dp/B0C2SD1K8B">Voters as Mad Scientists</a></p></li><li><p><a href="https://betonit.substack.com/">Bryan&#8217;s Substack</a></p></li><li><p><a href="https://en.wikipedia.org/wiki/Social-desirability_bias">Social desirability bias</a></p></li><li><p>&#8220;<a href="https://betonit.substack.com/p/the_respect_mothtml">The Respect Motive</a>&#8221;</p></li><li><p>&#8220;<a href="https://betonit.substack.com/p/against-veneration">Against Veneration</a>&#8221;</p></li><li><p><a href="https://www.amazon.com/Myth-Left-Right-Verlan-Lewis/dp/0197680623">The Myth of Left and Right</a></p></li></ul><h3>Transcript</h3><p>Coming 
soon.</p>]]></content:encoded></item><item><title><![CDATA[Robin Hanson]]></title><description><![CDATA[Listen now (56 min) | The sacred, humanity's descendants, social rot]]></description><link>https://www.danschulz.co/p/1-robin-hanson</link><guid isPermaLink="false">https://www.danschulz.co/p/1-robin-hanson</guid><dc:creator><![CDATA[Dan]]></dc:creator><pubDate>Thu, 06 Jul 2023 05:15:25 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/133327654/c6acfd02156f53c612a781a03a906d4f.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<div id="youtube2-NFbCWzul2Cs" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;NFbCWzul2Cs&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/NFbCWzul2Cs?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><div class="apple-podcast-container" data-component-name="ApplePodcastToDom"><iframe class="apple-podcast " data-attrs="{&quot;url&quot;:&quot;https://embed.podcasts.apple.com/us/podcast/1-robin-hanson/id1693303954?i=1000619481055&quot;,&quot;isEpisode&quot;:true,&quot;imageUrl&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/podcast-episode_1000619481055.jpg&quot;,&quot;title&quot;:&quot;1 - Robin Hanson&quot;,&quot;podcastTitle&quot;:&quot;The World of Yesterday&quot;,&quot;podcastByline&quot;:&quot;&quot;,&quot;duration&quot;:3363000,&quot;numEpisodes&quot;:&quot;&quot;,&quot;targetUrl&quot;:&quot;https://podcasts.apple.com/us/podcast/1-robin-hanson/id1693303954?i=1000619481055&amp;uo=4&quot;,&quot;releaseDate&quot;:&quot;2023-07-06T05:15:25Z&quot;}" src="https://embed.podcasts.apple.com/us/podcast/1-robin-hanson/id1693303954?i=1000619481055" frameborder="0" allow="autoplay *; 
encrypted-media *;" allowfullscreen="true"></iframe></div><iframe class="spotify-wrap podcast" data-attrs="{&quot;image&quot;:&quot;https://i.scdn.co/image/ab6765630000ba8a30a7d0bbb1347f30378a6b2e&quot;,&quot;title&quot;:&quot;#1 - Robin Hanson&quot;,&quot;subtitle&quot;:&quot;Dan Schulz&quot;,&quot;description&quot;:&quot;Episode&quot;,&quot;url&quot;:&quot;https://open.spotify.com/episode/6Z7bl9dx64cSnSAgYFZmX3&quot;,&quot;belowTheFold&quot;:false,&quot;noScroll&quot;:false}" src="https://open.spotify.com/embed/episode/6Z7bl9dx64cSnSAgYFZmX3" frameborder="0" gesture="media" allowfullscreen="true" allow="encrypted-media" data-component-name="Spotify2ToDOM"></iframe><h2>Timestamps</h2><p>00:31 - Changing careers late in life</p><p>01:29 - Philosophy</p><p>04:33 - AI</p><p>15:20 - The sacred</p><p>29:56 - Humans exploring the universe</p><p>38:02 - Social rot</p><p>46:52 - The Elephant in the Brain</p><h2>Links</h2><ul><li><p><a href="https://ageofem.com/">The Age of Em</a></p></li><li><p><a href="https://www.amazon.com/Elephant-Brain-Hidden-Motives-Everyday/dp/0190495995">The Elephant in the Brain</a></p></li><li><p><a href="https://www.youtube.com/@RobinHanson">Robin&#8217;s YouTube debates</a></p></li><li><p><a href="https://www.overcomingbias.com/p/what-makes-stuff-rothtml">"What Makes Stuff Rot?&#8221;</a></p></li><li><p><a href="https://www.overcomingbias.com/p/will-world-government-rothtml">"Will World Government Rot?"</a></p></li><li><p><a href="https://grabbyaliens.com/">Grabby aliens</a></p></li><li><p><a href="https://en.wikipedia.org/wiki/Construal_level_theory">Construal level theory</a></p></li><li><p><a href="https://en.wikipedia.org/wiki/%C3%89mile_Durkheim">&#201;mile Durkheim</a></p></li></ul><h2>Transcript</h2><p><strong>Dan Schulz:</strong></p><p>I am joined today by Robin Hanson. Robin is a professor of economics at George Mason University and the author of the Age of Em and the Elephant in the Brain. 
He's blogged at Overcoming Bias since the early days of the web, where he has popularized many ideas such as the great filter, grabby aliens, prediction markets for policy, and the human tendency towards hypocrisy.</p><p>His name has even achieved the status of an adjective -- ideas that are sufficiently unconventional are said to be Hansonian. Robin, welcome.</p><p><strong>Robin Hanson:</strong></p><p>Glad to be here. I think.</p><h3>[00:31] - Changing careers late in life</h3><p><strong>Dan Schulz:</strong></p><p>Should more people change careers in their thirties, forties, or even later?</p><p><strong>Robin Hanson:</strong></p><p>I don't know. There's this fascinating experiment done sometime in the last decade where they had a website where they said:</p><p>If you have a difficult decision, come here and we'll flip a coin and tell you what to do. They could then test whether people who made the change they were thinking about making were happier later on than the ones who didn't, by randomly controlling which decision they made, and it turned out the ones who made the change were happier.</p><p>So on average, people in a situation where they said, "I don't know what to do, should I make this big change?" and some website flipped a coin and they went with that, were better off. So I guess that suggests that we don't make enough big changes. So this career choice could be one of them. But I mean, there's a lot of things that should go into whether you make a career change.</p><p>So I don't know if I can say more generally than that.</p><h3>[01:29] - Philosophy</h3><p><strong>Dan Schulz:</strong></p><p>You've mentioned that philosophy is mainly useful for inoculating you against other philosophies. You've also recently started a podcast with Agnes Callard, who's a philosopher at the University of Chicago.
Have your views changed on this since the podcast?</p><p><strong>Robin Hanson:</strong></p><p>I still think it's true that philosophy inoculates you against other philosophy, and I think that's a big important benefit. I guess I've come to see philosophers, in more detail, as a discipline that engages a wide range of topics without systems. That is, most other disciplines have collected systems and they use those systems to attack questions, and they limit their scope of attention. And they limit themselves to a certain set of systems. And then philosophy covers a much wider range of topics and also just doesn't use these systems. So it's interestingly a sort of independent source of opinion on these topics. That is, philosophers are willing to just go into where other people have had systems and opinions and just come up with their own different opinions on those same topics, sort of waving their hands, waving these systems away and saying, yeah, who cares about that? And they're just gonna go do it from first principles, and it's somewhat healthy to have that independent check on the other disciplines. I think on average, I would probably typically go with the disciplinary, the systems approach.</p><p>But I've come to appreciate that intellectual competition, basically. That is, I think it's healthy in academia, in the world of ideas, if multiple disciplines compete and try to approach the same topics in different ways and don't coordinate to agree. That is, within a discipline what they often do is decide who's on top and who's in charge and what the official answer is, and make sure they have a unified front to the rest of the world so that they can, you know, maintain their respectability and prestige by not disagreeing with each other.</p><p>And that has all the problems of, you know, lack of competition. So I kind of appreciate the philosophers' competition. They sort of go into other people's fields and say, eh, I don't know about that.
What about this?</p><p><strong>Dan Schulz:</strong></p><p>Well, in a way, is this not what you do? You seem to take economics and enter all sorts of other fields and apply economic principles.</p><p>So are you a philosopher of economics entering other people's fields?</p><p><strong>Robin Hanson:</strong></p><p>I usually do it with systems, so in some sense I'm applying some systems, but I definitely am not, like, staying in my lane. If you think people should stick with wherever they were trained and the kinds of places they started, and leave other people to do the other things other people initially were doing, I'm not respecting that.</p><h3>[04:33] - AI</h3><p><strong>Dan Schulz:</strong></p><p>Let's talk a little bit about AI. You've had several debates over the last couple of months on <a href="https://www.youtube.com/@RobinHanson">your own YouTube channel</a>, actually, that everyone should go check out, with <a href="https://www.scottaaronson.com/">Scott Aaronson</a>, <a href="https://katjagrace.com/">Katja Grace</a>, <a href="https://thezvi.substack.com/">Zvi Mowshowitz</a>, and a few others. It seems like it's been a topic you've been going deep into lately.</p><p>Do you mind just briefly summarizing your take on whether or not we should be worried about this?</p><p><strong>Robin Hanson:</strong></p><p>You know, the thing that happened recently was that we had these large language models and they gave the impression that they were near human-level intelligence. That is, you could get the impression from reading a few of their responses, "Oh wow, all of a sudden we have human-level AI."</p><p>Now they aren't really there yet, but they give you that impression on first reading, and that allowed people to look farther into the future and say, "What will happen when that happens?" So usually we just deal with the world as it is, or the world as it's about to be.
That is, the near-term versions. And there are some people who focus on the long-term changes in society and where it might go, and most of us just ignore that. And we don't just ignore it, we dismiss it, we say, eh, that's crazy. And here we had a moment where people would go, oh, human-level AI. That could be a thing soon.</p><p>And then they freaked out, because in general people do not like big change. And most of the time we just dismiss these long-term future things, even though the long-term futurists are roughly right that the world's gonna be pretty strange and different in a while. But we set that aside because it's not now.</p><p>And I think if we ever really understood how different the future is gonna be and we put it up to a vote, we'd vote no, we don't want all that really big change. Well, the only reason the world changes a lot is because we're focused on these short-term things and we don't look at the long run and we don't think about it.</p><p>So this was a rare moment where a large fraction of the population was confronted with actually seeing an image of how big a change it could be, and they freaked out. That is, they said no. Now, there have been some people specializing in talking about the long-term future of AI, and some of those people had been warning about how AI could go wrong, and they have arguments and we can discuss those.</p><p>But I think the main thing that happened was all these other people said, "Ah!" and then they had these authorities over there saying, "You should be warning about this." And they were naturally tied together. The ordinary people and the journalists etc. said, "Oh, somebody should be, like, scared about this and here are these people scared about it, so let's quote them."</p><p>So that's not directly a knock on the people who specialize in it, but that's my summary of the overall situation here, is at the moment we have a world going, "Ah!"
because they saw this vision of where it could go in the long run.</p><p>Now this is quite different than, I think, the consensus over the last few decades of maybe tech specialists or whatever thinking about the future of AI. They've been relatively optimistic and relatively hopeful, and so this was quite a change. And so I was curious, what's going on here? And so that's why I did these conversations. I just did like a dozen conversations of an hour or two where I talked to people and said, "What are you worried about?" etc.</p><p>And my conclusion from those conversations is that the main thing that's going on here is revulsion at, or reaction to, a very other. That is, they're seeing the AI as (I mean, it's not there yet), but the AI they imagine is very other. It's just very different, and that makes the hair stand up on the back of their neck, basically.</p><p>It's just intuitively scary to imagine others who are powerful and who could then contend with us, and who might. And it's just that logical possibility that is basically the argument. People give you the impression there are some technical arguments, like some complicated things where you work through the math, but you realize no, it's just the idea that there could be these AIs and they could be powerful and they would be other, and they would have other motives, other goals, other allegiances, and they could have a conflict with us. That's it. That's the whole thing.</p><p>And so I tried to think more about that. And the framing that I would suggest is that this instinctive revulsion at the other is an instinct we have, and it roughly makes sense as an evolved instinct.
When you're thinking of dealing with actual things around you in the world, that is, evolution, natural selection of both genes and culture, plausibly would imprint in us the habit of being wary of things that are more different from us compared to the things that are more similar to us.</p><p>Because plausibly the things more similar to us share more of our genes, and if we ally with them and promote them at the expense of the things that are more different, that will promote our genes. And so that's my, you know, basic explanation for why we have this revulsion at the other. But if you actually think about what natural selection would promote, this is only a heuristic.</p><p>It doesn't get it right a lot of the time. So that's the thing to realize, that this heuristic can just go wrong, and in the case of AI, I would say it is going wrong. So my argument would be: you are thinking of these AIs, you're imagining them in your head, and you're imagining them standing across the street from you and being big and powerful and hostile, and you're going, "Ugh, I'm at risk."</p><p>And they are in fact your descendants. They don't exist yet. And when they exist, they will have arisen from us. They will be generated by us, caused by us, and they will be caused through a process that makes them similar to us. In many ways, they are our descendants. They won't be our descendants through DNA, but they will still be our descendants.</p><p>And evolution should make you want to promote your descendants even when they're different from you. That is, your children are often more different from you than your friends are, and your grandchildren even more so, and you should expect evolution over time to produce difference and change. But evolution would promote your encouraging and supporting your descendants, and you already expected that you and your descendants might have conflicts.</p><p>And you already expected that your descendants would eventually be more powerful than you.
So this is what you've already been expecting about your descendants. What your ancestors had to deal with, with you, was that each generation is largely replaced by following generations who become more powerful than them, who potentially can have conflicts with them, and who will choose differently. That is, they will have their own priorities and their own goals, and they will not inherit everything about their ancestors. They will reject some of the things about their ancestors and choose other things.</p><p>So that would be my story on AI: basically what's going on now is this revulsion at the other, a very deep instinct, based on this very vivid image of this eventual AI that's more powerful than us and may have differing priorities and may have conflicts. And if you think of that as, like, an alien coming from another star, invading the solar system, then you have that level of caution and revulsion and fear. If you think of them as your descendants and you frame them that way, then you should be okay with a substantial degree of difference and even conflict.</p><p><strong>Dan Schulz:</strong></p><p>So if it's really just othering our future descendants, do you think that there's some possibility that people who are concerned with AI risk are more likely to have a deep belief in objective moral truth?
And a kind of ironic pair of groups that tend to have this belief would be effective altruists, since that's one of the core tenets of the movement, but then also highly conservative or right-wing or religious ideologies are also very concerned with moral truth, and both of them seem to generally be a little bit skeptical of drastic change in AI.</p><p>Do you think there's a correlation there?</p><p><strong>Robin Hanson:</strong></p><p>So we've had this long history of science fiction depicting technical change, and usually they have the villains be religious people opposing technical change, and the heroes are more liberal, open-minded folk. And we've had decades of that sort of science fiction leading many people to believe that they would therefore also be more encompassing and accepting of various transhuman descendants, including AIs. And so the thing to notice here is, when you're looking at this far away in the abstract, you tend to believe you would be so, you know, encompassing and generous. When it's closer and you actually feel the threat, all that goes away. You're right back to being like everybody else.</p><p>So I think, in fact, in our society in the current moment, the woke point of view, if you will, is this idea that we have these innate hostilities, say about race or gender, and that we are culturally overcoming that, we're going out of our way to be generous and inclusive because that's our better nature and we feel we can manage that. But there's a limit, and this is past that limit. That is, even if we try to be very generous about our innate racial animosity or suspicion, whatever, we're not gonna try to do that with this. We're gonna go, "Oh no, let's draw this line and we must defend this line against that other."</p><h3>[15:20] - The sacred</h3><p><strong>Dan Schulz:</strong></p><p>Maybe that's a good segue into this more recent idea you've had on the sacred. So you did a deep dive to try and understand what the sacred is.
Do you mind giving a brief description?</p><p><strong>Robin Hanson:</strong></p><p>I got it into my head that I've been proposing many institutional changes and meeting opposition that's somewhat puzzling, and I thought, sometimes what people say is, you are violating the sacred here with your proposals. Your proposals are not treating the sacred as you should. And so I thought, let's figure this thing out. What is this thing, what is the sacred, how does it work? And so I set myself the project of figuring this thing out in the hope that I could then better deal with how it seems to be in the way of things I wanna do.</p><p>So the first order of business, usually in my way of thinking, is to collect a bunch of stylized facts, just the sort of puzzles that you're trying to explain, and then search for a theory to make sense of those puzzles. I did a fair bit of reading and tried to collect what people claimed were correlates of the sacred. Whatever the sacred is, here are things that go along with it. And now I have a paper I've published with a list of correlates, a whole bunch, I think it was 168. Anyway, so I collected those correlates into themes and asked which of these themes could be explained.</p><p>So the themes I summarized are: That the sacred is valuable, that's one. Two, that we show that it's valuable, that is, we sacrifice and have strong feelings as ways to show that we value it. That groups of people are often bound together by a shared view of the sacred, that's three. Four, that we tend to idealize the sacred, we simplify it and we take away its blemishes in order to see it as sacred. Five, that we set apart the sacred, we try to distinguish it from other things, we don't like it to mix together with other things, and we want it to be this separate, distinctive range of things and behavior, that's five. And six, that we have a norm or style that we should emote and feel the sacred rather than calculate and reason about the sacred, that's six.
And seven, that sacred things are often abstract, but concrete things become sacred by association with the sacred, and that's an important feature of our connection with the sacred. So a sacred holiday is a concrete time associated with the sacred, as is a flag or a love letter; that is, these abstract sacred things become concretely sacred in a striking way. And this is an important feature about this, that somehow we manage to make concrete things like flags or crosses sacred by association.</p><p>So these are the seven correlates, or the seven themes of sort of bundles of correlates, and the challenge is to explain these. Now, a guy named <a href="https://en.wikipedia.org/wiki/%C3%89mile_Durkheim">&#201;mile Durkheim</a> a century ago founded the field of sociology, and he had a story about religion where the essence of religion was the sacred, and his essence of the sacred was that the sacred binds groups together. So that was number three on my list. And you can see number three, if we postulate it as the core concept, explains one and two as well. That is, if we are binding together by seeing the sacred the same, that will make us want to value the sacred highly and show that we do, so that we can see that we are bound.</p><p>But the other four are less clearly implied by that theory. The other four being: idealizing, setting apart, feeling rather than thinking, and touching makes concrete things sacred. So the question is, what theory could explain those other four themes? And I came up with a simple story that draws on psychology: a phenomenon called <a href="https://en.wikipedia.org/wiki/Construal_level_theory">construal level theory</a>. So let me pause to explain that. Construal level theory says that whenever we see or think about anything, we have a range of seeing it near versus far. So if you look visually at a scene, you will see a small number of big things up close, which you are seeing in a near mode, and a lot of little things far away, which you are seeing in a far mode.
And the idea is our brains just think about things differently near versus far. Near things we see more concretely and far things we see more abstractly. So if I'm looking at a tree at the moment, there are a lot of little leaves, and in my mind the leaves are described very abstractly. They are little green blobs, and there's not much detail known about them in my mind, other than that the leaves are little green blobs of about a certain shade of green and a certain size of blob, but that's it. Not even a shape to the blob. And that's typical of seeing things in a far mode: it's far away, you see it more abstractly. That is, you have a small number of descriptors, each of which is a more abstract, less detailed descriptor. And that's what it is to see things in far mode. And near versus far applies not just to things that are visual: things in time are near versus far, things at a social distance are near versus far. Something you're confident in is near; something that's speculative or unlikely is far. In planning, you have high-level goals that are far, and you have specific constraints and practical considerations that are near. In a wide range of our thinking we have near versus far. And there is a robust literature in psychology showing it actually maps onto our brain structures, in the sense that our brains are organized to a substantial degree in terms of structures that are thinking abstractly versus concretely. Basically, when stuff comes into your eye, it goes through concrete layers, which then build up higher and higher, more abstract structures, and then the back layers are thinking in terms of large abstract structures of what you see.</p><p>So, that's near versus far, and the key observation is to note that this habit of seeing things near versus far is an obstacle to a group seeing something the same. So, if we want to see medicine as sacred, which we do in our society, we want to see it the same and agree on it.
But if I am sick and about to undergo surgery and you are not, I might see it in a near mode and you'd see it in a far mode, and then we would see it differently and we wouldn't agree on it.</p><p>And that's an obstacle to our treating this as sacred, because the point of treating it as sacred is so that we can bind together by seeing it the same. So the hypothesis is: when we have something that we want to use to bind us together by seeing it the same, we change our habit of seeing it, so that even when we're close, we see it as if from afar.</p><p>So take the example of sex versus love. Sex is something that you see differently up close versus far away. It's very noticeable, if you've ever, you know, paid attention, that your focus of attention and everything else is very different up close versus far away. Right, so someone talking about sex from a distance may well disapprove of the sex that someone up close would approve, because they just see it differently. Love, however, is something in our society we see as more sacred, and we see it more the same, and we all agree that love is great. But we see love as if from afar; we're less clear if any one situation counts as love. What is this particular relationship, is that love, is that not? We're not very clear on that. We're pretty clear on what sex is; we're not so clear on what love is. So, even if you're in a relationship for a long time, you could still not be sure if it's love, but you're sure that love is good.</p><p>So love is an example of something we treat as sacred, and we see it from afar, and that makes it harder for us to tell for any one case whether the label applies and makes it harder for us to reason abstractly about it, but it unites us more in our shared view that love is great.</p><p><strong>Dan Schulz:</strong></p><p>So is the sacred good? It seems like for institutional things like marriage, to some degree it's required, right?
Do you treat anything as sacred, and does it improve your life?</p><p><strong>Robin Hanson:</strong></p><p>I more think about this as something that's not really an option. We have a strong urge and habit of treating something as sacred. Almost all of us do. I do too. So it's not really going to be an option for you to not treat anything as sacred. You could pretend you don't, but then you'd just be looking away from the things you're actually treating as sacred and not reflecting on it. So I would say the question is more: do we have a choice about which things we treat as how sacred? And then what is the basis for making that choice, which things would be better to treat as sacred versus not? And so then I can use this theory to identify the trade-offs that are there. So we could say, by treating medicine as sacred, we put more energy and effort into medicine. We devote more resources to medicine on the one hand, and we agree that we are bound together and we feel more bound to the people that we share this view about medicine with. Those are positives.</p><p>Negatives are, well, whoever we're binding together against, we are distancing ourselves from those other people who don't value medicine so highly. We are feeling more distant from them and hostile and suspicious of them. And medicine itself we are not very good at thinking about, reasoning about, calculating, so we don't do a very good job of distinguishing effective from ineffective medicines, or making better institutions for medicine, or even calculating the value of medicine. Those are all things that treating it as sacred prevents or makes harder.
So there's the trade-off about medicine.</p><p>So for example, we actually do way too much medicine, and so this benefit of putting more energy into it is not really a benefit, at least on the margin, where we're just doing way too much and we're doing a really bad job of creating institutions and incentives for the medical institutions to give us good medicine and not bad. So we're paying some pretty high costs for treating medicine as sacred. But that gives you a sense of how we'd want to look at all those same costs and benefits for every other thing we might treat as sacred, and ask, will that go better there?</p><p>So for example, I think I notice that I treat sort of intellectual inquiry, and the honest pursuit of that, as somewhat sacred. That is, if you ask me to lie and offer to pay me a modest amount to do so, I am somewhat horrified that I would consider such a thing. I don't want to be greatly influenced by financial payment or status or something in terms of which answers I choose in an intellectual inquiry, or even which topics I pursue.</p><p>Those are all signs that I set it apart. That is, I make a strict line between sort of high abstract good thinking and practical reasoning about things that I would be willing to compromise more on. I idealize it in many ways, and so I can see I'm treating it as sacred. And then I can use this theory to realize that I'm then gonna do a bad job of making these trade-offs. Sometimes I should take the money and lie. I mean, actually, in the abstract, yeah, it would be practical and helpful, but this thing is an obstacle to that. And I should be more flexible, mixing other kinds of reasoning and this sort of abstract reasoning all together in a big mixed-up bunch where I do some things a little one way or the other way, and the treating as sacred gets in the way of that.
I'm making this strict separation, and it's gonna get in the way of a lot of sort of abstract reasoning about which variations on this are a good idea and when we should do them. So again, we can see the trade-offs here. One way to think about it would be to ask: well, which things, if we treated them more as sacred, would be the least distorted by it? Which things more naturally have the features that we are attributing to the sacred and forcing on other things that we treat as sacred? Which things are in some sense naturally more sacred?</p><p>And one thing that comes to mind there is math. Math is idealized in a very literal sense. Math is set apart in a very strict, definite sense. And these distortions are less effective on math, because as long as we stick to the idea of a mathematical proof, and insist that people provide proofs for math claims, these other sorts of biases we might have about it are kept in check. And surprisingly, our educational institutions actually do treat math as substantially sacred. That is, it's pretty obvious that math is useful, but it's not useful in proportion to the amount of effort we put into teaching people math and the amount of effort people put into using math in various academic disciplines. I mean, it does seem like we're going overboard with respect to math. But plausibly it's because it has these advantages as the sort of thing you treat as sacred, in that it can stand it, it can hold up to that sort of pressure and not buckle in the way medicine buckles.</p><h3>[29:56] - Humans exploring the universe</h3><p><strong>Dan Schulz:</strong></p><p>It seems like at some point, according to your ideas, we'll come to a path where two roads diverge. We can either stay a quiet society or we can evolve to become a grabby society.</p><p>So for humans, like, what size of organization do you think actually needs to inherit these grabby values for us to become grabby?
Will we need, like, a world government that says everyone is on board? Could we have just one country that says, we're gonna go become grabby and you guys all stay here?</p><p>Could just a startup do it, like some kids in a garage? Who needs to adopt these values for us to achieve grabbiness?</p><p><strong>Robin Hanson:</strong></p><p>So to be clear, we're talking about whether our descendants expand out in the universe. And the key idea, I think, is that in a large civilization it just takes any one small part that is inclined to become grabby, and has the tech and resources to do so, for the descendants to be grabby. So the opponents of grabbiness, those who would prefer that not to happen, are at a disadvantage. They have to coordinate to prevent anyone from doing that. Now, that's a tall order, but in the last half century our world has in fact been coordinating more at a global level. That is, say a century ago, the world was roughly described as a set of countries who were competing, and within each country there were elites who were mainly oriented around that country and promoting that country's interests, or competing within that country. And since then, the world has instead switched to one where the elites within each country are largely mixed in with and identify with the elites of other countries. There's more of a world elite class, and this world elite class mainly cares about their reputation and standing with that world elite community. And that's created an enormous convergence of policy around the world.</p><p>As we saw in Covid, at the very beginning of the pandemic, there were all the usual experts who had their usual stances on masks and travel restrictions, whatever, and then elites around the world all talked in a month about what to do, and they came to a different opinion: masks were good, and travel restrictions were good, and lockdowns were good. And then the whole world did it that way.
The whole world did what these new elites said pretty much everywhere, and that's because the elites in each community wanted to be in good standing with the elites everywhere else in the world. They basically went along with this elite consensus worldwide.</p><p>And we see that same level of convergence with lots of other kinds of regulation, like nuclear energy, genetic engineering, airline safety, lots of other things. Basically the whole world does it pretty similarly. And this sort of worldwide regulation is often there to prevent deviations from the world consensus. So for example, the only country in the world that allows any sort of organ sales is Iran. And the world of bioethicists is still irate about that and trying to make sure that somehow we can make those Iranians change their mind. So we are in this world where people converge on regulation worldwide, and they're wary of any one place in the world allowing changes that would then disrupt the rest of the world.</p><p>And that's what you're seeing perhaps in the AI space here, as we talked about earlier. You might have argued, well, unless you've got a worldwide agreement on restricting AI, it's just not gonna work. And you might say, well, we already restrict many things worldwide without official worldwide agreements because of this elite community. And so people are trying to persuade this elite community worldwide to restrict AI, and they think they have a shot at that. And in some sense, that's based on this perception that if anybody allows this, it'll disrupt the whole world. 
So as we move into the future, any way in which competition might produce evolution that would create disruption worldwide might be limited, including AI or genetic engineering or nuclear energy.</p><p>These are all things that, if they had been pursued substantially in any local place, would've created competitive pressures to disrupt the whole world, and they were shut down and stopped so far. And once the possibility of interstellar colonization is open, everybody will know that allowing anyone to leave will be the end of this era of global coordination. That is, once some colonists leave and go out there, they can colonize something, and then new colonists go from there. That colonization wave is out of control. It will evolve and change in ways that the center won't even be aware of for a long time, and certainly can't control, and those colonists would then have waves of descendants who come back here and contest with us, and we would be largely at their mercy. That's the consequence of letting anyone leave.</p><p><strong>Dan Schulz:</strong></p><p>So it sounds like we're on a trend right now of, you take nuclear power, you take genetic engineering, and what seems like potentially AI as well. You have the world government saying, or no, it's not the world government.</p><p>It sounds like in your concept it's a more abstract idea of a world elite community, who want to impress each other and show each other that they're in the club, setting these rules. And so under this framework it may not be that the whole world needs to agree, but it's gonna be challenging for smaller localities to cause us to become grabby.</p><p><strong>Robin Hanson:</strong></p><p>So initially, the cost of doing an interstellar colony will be very large, so you'll only need to restrict organizations who could possibly afford it, right? 
As time goes on, that cost will fall, and so you'll have to restrict more and more organizations. But that's been true, say, of genetic engineering or disease engineering or nuclear engineering. Even in the last half century, as the costs of these things fall, we have to be more intrusive with our regulation and surveillance in order to prevent all the people who could do it from doing it. But we may just grow into that. As the costs of things fall, we may just have more surveillance and more intrusive regulation covering more and more people, exactly with the rationale that if we don't, then it'll get out of hand.</p><p><strong>Dan Schulz:</strong></p><p>This actually brings up a question. So you're really unusually comfortable with descendants being very different from us. And your book The Age of Em talks about what most people would not consider humans at all, but in your view they are descendants of humans, and therefore we can be comfortable with a deeply weird world.</p><p>Do you worry at all that AI or some of these new technologies could create the opposite risk, where we are stuck in the 2025 or 2030 timeline indefinitely, for hundreds or thousands of years, and this sort of moral evolution actually stops? We get locked in. Is that a concern that you've considered?</p><p><strong>Robin Hanson:</strong></p><p>Honestly, those are the two options. There isn't really much in between: either we allow evolution and competition to continue, and then our descendants become as different from us as we are from our ancestors, which is really different, or we don't allow that sort of change, in which case we lock in the current preferences and styles for a long time.</p><p>Those are the two options. There really isn't another one, and neither one looks very pretty. 
I don't see that much in the middle though.</p><h3>[38:02] - Social rot</h3><p><strong>Dan Schulz:</strong></p><p>It does seem like, though, that if you're pro-grabby, you're pro us exploring beyond the Earth. It seems like that's a sort of trapdoor decision, right? Once you go, you're not going back. So if we do have lock-in for the next hundred or so years, and if it only takes one uprising or event or change for us to go and become grabby, then presumably the chances of us becoming grabby could actually be quite large, and the lock-in scenario may not be such a big deal.</p><p><strong>Robin Hanson:</strong></p><p>So I think about this in terms of social rot. That is, all civilizations in the past have risen and then fallen again, suggesting that ours might do so too. But now we have a global civilization, so it might rise, as it's doing now, but then fall again later. This is just an empirical observation, but if we think about what we know about software and the way it rots, and the way firms rot, and the way other sorts of adaptive systems rot, we should see that it's a pretty strong tendency. The only solution to rot has always been some larger field of competition where rotting things rise and then fall, but new things rise to replace them. But that larger world of competition would produce this long-run change and the grabby expansion. So the attempt to prevent that grabby expansion and long-run change would typically be in terms of some global system you create that is not primarily an arena of competition, but a system that limits that competition. But that system itself would then rot, and now you face a big problem for the long run: you don't like big change, you've created this system to prevent big change, but this system rots. 
So the big question is how slowly or quickly does it rot, and can you make it last indefinitely, or is there some time limit after which it would rot so far that it just couldn't manage to sustain this prevention of change? That's a fundamental question about social rot that I hope to start thinking more about soon. But it seems to me like a really important question.</p><p>I mean, even without this thing, we are in a civilization that's plausibly rotting. And that raises the question of how this will play out. Will there be a fall, and how bad will it be, and what will be the disruption in the transition to the next rise?</p><p>So one of the most dramatic parameters on which our world seems to be rotting is fertility. That is, fertility has been falling quite consistently with wealth, and it's below replacement in half the world now, and not obviously stopping its fall. Fertility is still falling in most of the world, even the parts below replacement. And that looks like a kind of rot in the sense of, you know, long-term civilization growth. Now, many people say, "Ah, but evolution surely will create a subset of humans with higher fertility that overcomes this." But the question is, when and how?</p><p>So I might say the key problem is this. We've had some subcultures in our world so far that have had unusually high fertility, say Orthodox Jews or Mormons. But if you look at the fertility of those subcultures, it's falling at the same rate, it's just delayed compared to the rest. So I would summarize that as saying they are not sufficiently insular. 
They're still under the influence of the larger culture, and behavior in those subcultures is still sufficiently influenced by the larger culture that they are not succeeding in creating a self-sustaining higher-fertility subculture.</p><p><strong>Dan Schulz:</strong></p><p>To increase fertility rates, it seems like the notion of the sacred obviously helps, since certain religions have had some of the highest fertility numbers, like Mormons and, to your point, Orthodox Jews. Is it a little bit ironic, almost, that we require the sacred for an evolutionary process, which should be a little more Darwinian and require less of this human-centric idea?</p><p><strong>Robin Hanson:</strong></p><p>The sacred presumably evolved from a Darwinian process. It's part of human cultural evolution that allows cultures to reproduce and succeed. So it's not anti-Darwinian at all. In people's concept of the sacred, they think of it as something that rises above Darwinian selection. That is, part of the norm of the sacred is that you are to assume that sacredly driven behaviors are not constrained by, or subservient to, selfish genetic incentives. But in fact the sacred arose from Darwinian cultural evolution, and that's plausibly where it came from.</p><p>I mean, orthodox religious people are not more sacred than other people. They just have different things treated as sacred. So it's more about how much a subculture can maintain a different concept of the sacred compared to a larger culture. And, you know, it's possible for a subculture to value fertility highly enough that, if it were isolated from the rest of the world, it could, even in the face of other pressures that would promote lower fertility, manage to support higher fertility. The problem is the world as a whole has a culture with elements that discourage fertility, and these subcultures are not insulated enough from that. 
They still watch TV and movies and hear the news and go to school and work with other people enough that the larger culture's lower value on fertility passes on to enough of their children that they are not actually making a large, growing subpopulation.</p><p>So this is an example of rot, you see. The key point is that when systems rot, what it takes is a different system that's insulated enough from the original system to grow again. It's like in the past when empires fell: an empire might be composed of five provinces, all of which rotted together. No one province could resist the rot of the rest of the empire, because they were so interconnected. It had to be a whole other empire, pretty disconnected from the first one, that would then rise and replace the first empire, because it was not much influenced by all the rotting features of the old civilization. So in our world, the problem is that if our civilization rots as a unit, then there'd have to be a pretty insular part somewhere that could grow and resist the previous thing. So the question is: how insular?</p><p>So for example, again, the Mormons and the Orthodox Jews seem not sufficiently insular. Now you could say, how about China? China is somewhat different from the rest of the world, and China has somewhat different institutions that arose more recently, and so maybe China would rise while the rest of the world falls, and it'll work that way.</p><p>My guess is China's more integrated into the rest of the world economy than would work for that scenario, but I would be happy if that were true. My Age of Em scenario is a scenario of sufficient difference, in the sense that if the world of brain emulations is left alone and allowed to form its own rules and regulations, etc., without too much constraint by the rest of the world, then it could form this whole new fresh civilization that would then grow. 
But as we're seeing with AI, people may not be very willing to allow this new sphere to grow and have its own rules. But that's something of what happened with, say, computers in the last half century or so. Basically, we had a lot of other industries that were relatively heavily regulated, but with computers we said, oh, who cares about that? And then it was just largely unregulated, and it's been the fountain of growth for the half century. And now we're finally getting around to asking: why aren't we regulating that? It looks like it has a lot of influence. Shouldn't we be regulating that? And we're going, yeah, it seems like we should be regulating that. And so we may be about to clamp down much more on computer-based innovation, because we're realizing we've been treating it differently from other industries and maybe we don't want to.</p><h3>[46:52] - The Elephant in the Brain</h3><p><strong>Dan Schulz:</strong></p><p>Let's shift over lastly here to your book The Elephant in the Brain. Do you wanna give just a real quick overview of the main point the book tries to get across?</p><p><strong>Robin Hanson:</strong></p><p>We are all doing a lot of things in our lives, and for most of those things we tell ourselves a story about why we're doing them. And we social scientists, when we come in to study areas of life, listen to those usual stories and usually assume they're right. And we go on modeling those areas based on those claimed motives. So, for example, people say they go to school to learn the material so they can get better jobs and be more productive on their jobs. And we go, well, yeah, that makes sense, and then we try to make models and analyses of school based on that sort of assumption. 
Or people say they go to the doctor to get well because they're sick, or they vote in order to make better policy for the nation.</p><p>These are things we all tell ourselves about why we do things, and for the most part, social scientists have just accepted our usual explanations. And then in all these areas there's a lot of puzzles, a lot of weird things going on. Like in medicine, it turns out there's no correlation between health and medicine. People who get more medicine are not on average healthier, strangely. And we scratch our heads about these puzzles and we come up with epicycles to try to make sense of them.</p><p>And I realized at some point that you could hypothesize that we're just wrong about our motives. The first place I did that was in medicine, as a postdoc long ago, and I said, "What if we're just wrong about our motives for medicine? What if it's just for a different reason than we say?" And in medicine, I said, "What if medicine is about showing we care and letting other people show they care about you?" If that was your motive, it might make a lot more sense of these details. It would make sense of the fact that medicine isn't on average helpful, but you're still able to show you care. And a number of other features of medicine: how it's a luxury good, in terms of how spending goes up with wealth, or how we don't seem very interested in lots of other things that affect health much more than medicine. And you could say, well, this is what's going on: we're not aware of it consciously, we are thinking in terms of other motives, but this is the real motive, and this motive then makes more sense of our behavior there. And so The Elephant in the Brain is a book saying this is generally true of a lot of things in our life. There are a lot of ways in which we're wrong about our motives, and so the first third of the book sets up the argument for why we might be wrong and why it might make sense to be wrong. 
And the last two thirds goes over 10 areas of life, saying for each one: here's the motive you think you have, here are some puzzles that don't make sense with that, and here's a motive that, if it were your real motive, could make more sense of these puzzles.</p><p>And so we suggest that this is in fact your motive. And by motive, what we mean is the larger force that structured your behavior and the institutions altogether over time, the function that it's serving, not what's in your head and what you're thinking about when you're choosing. That's the elephant in the brain: the idea that we have all these hidden motives, illustrated by 10 different areas where we say, in this area, you think you're doing it for this reason, but this is more probably why you're doing it.</p><p><strong>Dan Schulz:</strong></p><p>So to what extent do you think this is advantageous for us? And maybe for a thought experiment, let's say that we found a way to genetically engineer future people to be much more aware of their hidden motives and their tendency for signaling, and we could dial it up. If we say today 80 or 90% of all behavior is motivated by signaling, we could dial it up all the way to 100, or we could drop it down to zero.</p><p>What do you think would be optimal for our society?</p><p><strong>Robin Hanson:</strong></p><p>The first question to ask is: given the way the world is, what's optimal for you? That is, in an equilibrium sense. So the story here has to be that, at least until recently, this was the equilibrium because this is what it was optimal for you to do. That is, it was optimal for your ancestors, in the world they lived in, to have these false beliefs about their motives, and to act the way they did without understanding why they were doing things. That was actually more effective for them at achieving their evolutionary goals of reproducing and succeeding and getting respect and things like that.</p><p>Then there's three variations we could imagine. 
We could say, well, the world's changed and maybe it's no longer optimal for individuals to be so ignorant. In which case we're out of equilibrium, and then we all need to learn to behave in a new way, in which case we might need to learn to be more aware of these things, and then we'll be in an equilibrium because the world's changed and the old equilibrium doesn't apply. That doesn't seem very believable to me. It seems to me this is probably still the equilibrium.</p><p>Or we could say, look, this is the equilibrium for most people most of the time. But you don't have to be most people most of the time; it might not always be the best thing to do. So we could ask, "For whom might this be an exception?" And I think the strongest case might be, well, if you're a social scientist whose job is to understand what's going on in the world, then if you're wrong about what everybody's doing, you're just wrong about your job, which is to find out what people are doing. So as a policymaker or social scientist, you should just try to look at what's actually happening, even if that's not very natural for you as an individual.</p><p>But it might also be true that for some specialists, like managers or salespeople, it's especially important to be able to understand what's going on in the world. They need to consciously think about their marketing and management plans, and for them it'll be worth the sort of personal awkwardness of not smoothly interacting with other people in the way they otherwise would, in order to consciously know what is actually happening, because they need to know that for their job. So you might need that. You might be somebody who exceptionally needs to have a better conscious understanding of what you're doing. 
Relatedly, there might be nerds like me. That is, most people glide through the social world smoothly and intuitively, without knowing why they do things, and they don't realize the words they say don't really match their actions, but they're smooth enough not to notice, and everything goes fine. For nerds, the intuitive machinery that tells you what to do doesn't work so well. When you just do the intuitive thing, it doesn't tend to impress everybody and make them all feel comfortable. And so maybe you need to more consciously think through what you're doing in order to work smoothly in the world. And for that, you may need to confront your motives, to understand them better, to consciously reason about them.</p><p>So if we're in an equilibrium where this is still optimal for most people most of the time, you might ask, will we ever be in another equilibrium? And in the long run, plausibly. As I said, long-run change could be pretty large, and so I could very much believe that in the long run this will change. I don't have any particular reason to think it'll happen soon or fast. But I have made the long-run guess that our descendants will know that what they want primarily is to reproduce. You don't know that. That is, evolution produced you primarily to reproduce, and that was evolution's goal in setting up your mental structures and your intuitions and your feelings, etc. But it didn't choose to just give you the abstract concept of "I want to reproduce" and have you plan all your activity based on that abstract concept. That's not how it worked, because initially your mind really wasn't capable of supporting that abstract concept or applying it to very many things, and so that just wasn't an option. 
So long before that was an option, evolution gave you this complex mixture of all these different heuristics and feelings and habits that, on average, in ancestral environments, produced reproduction, but apparently has recently been failing, as we talked about with the fertility fall.</p><p>Somehow those heuristics are just going wrong lately. But evolution's kind of stuck: it didn't really know how to tell you abstractly how to deal with new circumstances by giving you an abstract goal; it just had all these concrete habits and feelings. In the long run, though, it seems more robust to just have creatures who know in the abstract: I wanna reproduce in the long run, let me calculate my great-grandchildren and figure out what life strategies will do that. I just know that's what I want, and I calculate that, and that's straightforward. That seems like a more robust way, in a changing world and changing environments, to achieve the end of having more descendants. So that's a way in which they would be, say, less misled about their motives, but they could still be misled about other parts of their thoughts.</p><p><strong>Dan Schulz:</strong></p><p>Robin, this has been an absolute pleasure. Thank you for coming on the show today.</p><p><strong>Robin Hanson:</strong></p><p>Nice to talk to you, Dan.</p>]]></content:encoded></item></channel></rss>