Vitalik Buterin

Libertarianism, vision for Ethereum, and AI

Timestamps

(0:00:00) Intro

(0:00:21) Governance mechanisms

(0:02:32) Talent in crypto

(0:10:02) Talent clusters

(0:15:23) Crypto conferences

(0:19:07) Vitalik’s vision for ETH

(0:28:51) Libertarian canon

(0:47:25) ETH cultural or technological innovation?

(0:53:19) Learning languages in the age of AI

(0:57:05) AI and d/acc

(1:06:43) P(doom)

(1:12:17) Humanity’s descendants

(1:17:13) If ETH succeeds, why?

Links

- Vitalik’s blog

- Follow Vitalik on X

- Vitalik’s techno-optimism

Transcript

[00:00:13] Dan: All right. Today I have the pleasure of speaking with Vitalik Buterin. Vitalik, welcome.

[00:00:18] Vitalik Buterin: Thank you so much, Dan. It's good to be here.

[00:00:22] Dan: First question. If you were put in charge of improving a large governing body, so say like the EU, how do you think about the relative importance of fixing the governance mechanisms and laws versus the talent of the people making the decisions? Said another way, if you could only choose one to start, would you change the incentives to attract highly talented people to enter the government, and then leave them at the mercy of today's bureaucratic rules, or would you re-architect the way that the government actually works and leave the existing people in charge?

[00:00:49] Vitalik: Wow, that's a tough question. It's definitely in that category of questions I haven't really thought about directly. I feel like I lean more on the people side. I think 10 years ago, I probably would have leaned more on the institutions and incentives side. I think those things are important, but the other side of that is basically that there have been so many attempts to create much better incentives, and so much of the time it ends up either falling completely flat or just being okay, less impressive than people expect.

I definitely feel more pessimistic about this improve-everything-by-improving-the-incentives direction than I did, say, five years ago, even though I do still think that incentives are quite important. I'm trying to think if I have a more detailed answer, because the EU in particular is one of those things I have less experience understanding the deep internals of than other countries, just because I've spent relatively less of my time there. It's the sort of thing where you don't absorb as much information about it by default, just by hanging out on the internet. It's just harder for me to give amazing topic-specific answers about.

[00:02:32] Dan: Got it. Got it. Okay. I have a couple of questions on talent. One of the things that's always made me really bullish on crypto is the perceived level of talent in the field. It's got this hugely ambitious vision, and it attracts a lot of really, really super ambitious people. Now that AI has a lot of mind share among the same set of people that might be thinking about what to do with their careers, do you think the aggregate talent level entering crypto is higher or lower than it was three years ago?

[00:02:59] Vitalik: That's a good question. I think it's hard to give an estimate because there's also just a big confounder, which is that the crypto space just keeps getting more and more mainstream with every passing year. There's this unavoidable regression-to-the-mean effect. If you go all the way back 10 years ago, for example, the kinds of people that crypto attracted were the kinds of people that were really in it for the ideals.

Often these were people who were quite skilled programmers, the sorts of people that would totally be able to earn a million dollars a year in some regular job. The problem is that they could totally not be motivated to actually stick to that regular job. They need something that they're passionate about. These are the people who are passionate about open-source software, freedom, empowerment. All kinds of ideas that are at the core of the crypto creed.

It reminds me of how there are a lot of these developers. Amir Taaki was one. He created libbitcoin, which was one of the earliest alternative Bitcoin implementations. He ended up teaching me quite a bit at the time. He was the sort of person who at the time called himself an anarchist; now he's moved to democratic confederalism. He spent a lot of time in, I don't know, the Kurdish areas of the Middle East and was really impressed by studying that philosophy and the values there, and those are the sorts of things that really drive him.

That's the sort of thing that I think crypto still has, but it's relatively been diluted, because the salary potential also exists now. There's plenty of people who would rather earn 200k working on something that's actually meaningful in the space than earn two million just working on some random casino that's just the same as all the other casinos. So that still exists, but that dilution effect is there.

I don't feel like I have seen too much impact from AI yet. If I had to guess what impacts it would have, I would guess there would be a sorting effect. The sorting effect that I have in mind is like the sorting between people who care about things like, I don't know, freedom and openness and privacy and our global networks, specifically, versus people who just care about generic cool stuff.

I think AI attracts a lot of the second, and AI relatively speaking attracts less of the first. The thing with crypto about five years ago is that it attracted them both. When you have a space that attracts both, it can be hard to separate one from the other. When you're working with a person, it can still be really important to figure out which one of those it is, because even if it doesn't matter today, it matters in terms of what's going to motivate them five years from now, or what's going to motivate them if, let's say, the political climate really changes a lot or the incentives shift in some way. That would be the thing that I predict.

Basically, this kind of divergence between the values that are specific to crypto and the kind of people who find cool things attractive in a more generic sense. I feel like there are very high-quality people in both camps.

[00:07:01] Dan: Got it. Maybe talking specifically about what makes someone good in crypto. I heard you talk before about Danny Ryan, who emerged as what you called a decentralized PM when Ethereum was moving to proof of stake. I think what you mentioned is he doesn't have any formal authority to actually order anyone around. Mostly what he is doing is talking to the different client teams that are building the software within the ecosystem. What traits do you think make someone a good fit for this sort of role over a more traditional PM at a tech startup?

[00:07:32] Vitalik: It's a good question. I definitely feel like I understand the Danny Ryan role much better than I understand the traditional startup PM role, just because I ended up basically jumping from university straight into the crazy crypto world. If I had to give the best answer, I would say, first off, there's definitely a type of diplomacy skill that's required that's much more specific than just being able to tell people, "This is what you're doing, and this is the thing that needs to be done." There's much more of a need to match people to tasks and match teams to the tasks that they actually want to do and would be excited about doing.

That's a thing that I think is important to keep in mind in any functional organization, but the dial of how important that is goes way up to 11 in the crypto space, I feel, just because when you don't have perfect control over what people do, you have to rely on people's intrinsic motivations quite a bit more. Another thing is, I think there's this much more public-facing function that's important in the crypto space in general.

The way that I've thought about it is, sometimes there's this idea that a company has a technical co-founder and a non-technical co-founder. The way that I would extend it, in a way that I think realistically applies to anyone these days, but applies even more so to crypto projects, is that you have a technical co-founder, you have an organizational co-founder, and you have a meme lord.

[00:09:36] Dan: [laughs]

[00:09:36] Vitalik: Sometimes you have people who are two of those, sometimes even three of those. The meme lord thing is a role that you have to do and be part of and embrace. I feel like Danny has definitely done quite a good job of stepping up into that too.

[00:10:03] Dan: There's a lot of talk this year that if you're serious about AI, you need to be in SF. Regardless of whether or not that's true, it does seem like physically localized talent clusters have been a big part of the history of computing and startups. Ethereum and everyone who works on it is obviously much more geographically dispersed. I'm curious if you think this is a feature or a bug for Ethereum. Then, depending on your answer, what other industries that had success being localized would have been even more successful if they were dispersed?

[00:10:31] Vitalik: Yes, I think one factor that's very specific to crypto is just the fact that these are global networks. A big part of their entire point is that they're not overly controlled by one country or even one cluster of companies. They are this unique thing that people from all different parts of the world are able to trust, even if they don't necessarily trust each other. That's not a thing that the centralized US tech sector is really able to benefit from.

There's just that one unique aspect of crypto, the extent to which it's really still one of the very few things that's international in this way at a time when increasingly almost nothing else is. I think that creates this extra motivation for the crypto space to try to be more dispersed and explicitly appeal to this much more dispersed community, and attract them not just as users, but as developers and just any kind of contributors.

Putting that aside, in terms of just raw productivity, I think one of the things that's interesting is that the crypto space has recognized the importance and value of in-person collaboration, but it's developed these totally bespoke, crazy institutions to try to achieve that despite being what it is. One of these is the conference circuit. There's this thing where you have conferences happening somewhere pretty much every week, and there's people that go to some of these at various frequencies. Sometimes this gets criticized as just wasting time and wasting resources, but I think one of the really powerful benefits it provides is the ability to have all of these different people actually get the benefit of this in-person time together despite living in and coming from totally different parts of the world.

Then another example of this is within the Ethereum Foundation, we pretty regularly have these retreats. Basically, somewhere between 20 and 150 people, depending on the event, come together in a particular location for one or two weeks. Often, but not always, it's scheduled to coincide with a conference, but regardless of whether it is or isn't, people get a chance to talk with each other face-to-face, have this high-bandwidth, very big-picture discussion about things that are really important to coordinate on, get on the same page, and then go back to the remote world and actually execute on those things.

I think one interesting consequence of this, and one thing that is special about the crypto space to me, is that there is this culture of collaboration between different companies. Sometimes it's between companies and academic groups, sometimes between one company and other groups, sometimes between companies and the Ethereum Foundation. It feels like it really exists to a much greater extent than I see in a lot of other places.

I think one of the reasons why is probably that the structure of the in-person interaction is such that people get lots of face time, not just with people who are working in the same place, but also with people working all across the ecosystem. The average person from Scroll has probably met a bunch of people from Polygon, and the average person from Optimism has met a bunch of people from Arbitrum, and so forth.

I don't have super rigorous evidence for this, but I have a gut feeling, and it would be interesting at some point to explore more, that this aspect of not just companies having company retreats, but also having these more ecosystem-wide things, really contributes to this spirit of cooperativeness that exists.

[00:15:24] Dan: Yes. On the conferences, this is a big thing that's maybe even uniquely big in crypto. How do you think about choosing a conference to go to, and what makes a really good one?

[00:15:34] Vitalik: The way that I choose is basically that I make a choice of which continent or which area I'll roughly be in at a particular time, because I just can't be literally flying halfway across the world every week. Then once I've chosen that, I look at whether there are things that look interesting enough in terms of what kinds of people I can meet, both local community people and global Ethereum people, and what kinds of important things are going to be talked about there. If an event looks important enough, then I go; otherwise, I don't. If something is on the wrong continent, then I default pretty strongly toward not going to it.

In terms of what decides the quality of an event, one division is: is this a research and dev event, or is this one of these big conferences? For research and dev events, it's definitely just having the right people there. If you just put a hundred really bright people who are already working on the important problems together, even if you just put them in some random hotel for a week, lots of amazing stuff is going to come out.

For some of these bigger events in the crypto space, the big divide is basically: is this an interesting conference, or is it a shill conference? I feel like every space has its shilly side to some extent, but that's definitely a dynamic that gets amplified quite a bit in crypto. There's the question of: is the primary vibe here people trying to do interesting and meaningful things, or is the primary vibe, "Hey, I'm going to tell you about this token whose price is going to go up by a factor of eight within three weeks"?

The first looks very different from the second. They have very different crowds. They have different focuses. You can tell pretty easily whether a particular event is more in one camp or the other. Then, of course, sometimes a thing in the second camp tries to latch onto a thing in the first camp for legitimacy, and you have to figure out to what extent that's going on.

I think from the point of view of someone who is not very deep in research and dev circles, the main value of a conference is not the presentations, because you can always watch presentations on YouTube. To me, the main value of an event is: can you have good side conversations and meet interesting people that you want to meet there? Sometimes having high-quality presentations is almost a way of creating an advertisement that says, "Hey, we actually do have interesting people here, and so actually interesting people should feel welcome here."

[00:19:06] Dan: Yes. A question on your priorities. I know you've talked about how your top priorities for Ethereum really haven't actually changed that much over time. It's like scalability, privacy, wallet security. I'm curious, what about your vision? Could you articulate your current view of why Ethereum is important for the world and talk about whether that vision has changed over time?

[00:19:26] Vitalik: Yes. I think Ethereum and the broader crypto space are important because they let us create applications and tools that are very important for our collective, collaborative functioning, but in a way that doesn't require everyone to trust the same centralized intermediary. I think being able to do that is good because once you trust a centralized intermediary, there's just lots of ways in which that relationship can slowly turn exploitative over the course of a decade or two.

There's lots of examples to point to for this. Even things like social media started out being very user- and developer-friendly, then turned into these closed things where they try really hard to stop you from doing anything except through their interface, then charge more, and essentially become more and more exploitative over time. There's also just the fact that there's a limit to what things people are even willing to trust and how reliable particular centralized actors are.

Especially as we've seen with things like money and the financial system, when it depends on centralized actors, sometimes it ends up just completely failing. There's definitely lots of countries around the world where the centralized institutions just can't be trusted at all. Sometimes you end up excluding large parts of the world and basically only serving very normal Western customers.

Sometimes it just ends up being very inefficient, and it often ends up being worse at privacy. One of the big trends that we're seeing again now, with the whole push to a cashless society that's happening in a lot of places, is that your ability to just do payments in a way that doesn't give your information to third parties, which is a thing that we've had for thousands of years, is potentially very quickly disappearing, right?

[00:21:55] Dan: Yes.

[00:21:55] Vitalik: It's that intersection of applications that let people do things with each other, but without all being under the thumb of one big thing.

[00:22:07] Dan: I guess what I'm really curious about, though: if you were to answer that question five years ago, are there any core things that have changed in the vision?

[00:22:15] Vitalik: I think a lot of things have become more specific. For example, five years ago, I was talking about money. I was talking about the thing that would eventually be called DeFi. I was talking about prediction markets. I was talking about ENS. I was not talking about NFTs; NFTs caught me totally by surprise. But I was talking about a lot of those things. Now I'm talking about similar things, but the things that I have to say about each individual category are much more detailed. Take decentralized social media, for example.

You can talk about the virtue of making social media more decentralized in the abstract and say, "Hey, open-source software is good," or you could talk about it in the context of, "Hey, we just realized that all of these platforms are controlled by one company or one guy, and that one guy can suddenly be a totally different guy from who you were expecting. Things can change very quickly." People have a lot of specific examples they can point to of what they want to avoid. Then the other thing is that there are specific projects, like Farcaster and Lens, which are probably the best ones. They actually deliver, in pretty concrete ways, on these ideals that both I and a lot of people have been blabbing about.

If you have an account on Farcaster, then your username is actually an ENS name. It's plugged into this decentralized name system and Farcaster the company can't just go and take your name away, or another example is the whole vision that you have this separation between the content and the view. You have the content, which is just whatever posts that people decide to make, and then you have interfaces that let people actually see the content.

There have been all of these ideas that people have talked about in theory: if you separate those two, then you can have different people creating views for the same content, and people can experiment with different kinds of moderation and different kinds of algorithms and filtering. If you don't like one, then it's much more practical to switch to another. If you want to create your own, it's much more practical to do that, because you don't have to also build up that network effect from scratch.

These are theoretical arguments that could have been made years ago, but now you can actually see it in action. You can go on Farcaster and see Farcaster content, and okay, it's a decentralized Twitter. Then you go on-- I forget what it was called. It's not Flik, but the thing that makes Farcaster look like a Reddit. I think it's Flink. You go on it, and you see the exact same content. It just looks like a Reddit.

You have these different ways of seeing the same thing, and you have actual UI competition within the same ecosystem. It's something really cool and powerful. You can actually see the benefits of that in a very real way. That's something that I would totally not have been able to point to about three years ago. There's a lot of that kind of specifics. Then, within the DeFi space, there's a much clearer separation in my mind between what kinds of DeFi projects are really interesting and what kinds are just totally pointless. Within the DAO space, I think it's pretty similar.

One example of a thing that I really believe about DAOs more than I did before is that I think 20% of what's important about a DAO is the governance mechanism, and 80% is the communication structure. If you have even the best governance mechanism, but the ways that people actually communicate and come together on decisions are totally crappy, then the outcome is going to be totally crappy. Whereas, if you have a community that has the tools to stay aligned on a communication level, then often even governance structures that are theoretically super crappy can keep limping along much better than you would expect. Those would be some examples of my beliefs on specifics that have changed.

I think in terms of bigger picture stuff, I'm trying to think if I have very particular thoughts. Probably a thing that has also become less of an emphasis for me is this idea that you even can create a theoretically perfect governance mechanism. This is something that I feel like I was trying really hard to work on and create a mathematically provable, perfect governance mechanism.

Then at some point, I wrote those big, long blog posts that you might have read from 2019 or '20, where I just realized, "Hey, wait, especially with this whole collusion thing, there are just these fundamental impossibility results. You have to look at totally different axes of the problem if you want to get better results."

I think it's less about searching for utopia and more about just creating basic infrastructure so that some of these aspects of how we interact with each other technologically can continue to stay reasonably open and free, in ways where international cooperation continues to be possible. Preserve those things as much as possible through an era where all kinds of stuff is going to change every couple of years, pretty much all the way up until the singularity.

[00:28:51] Dan: A question on some of your intellectual influences. I know you've mentioned before that when you were first getting into Bitcoin, you went through the libertarian canon: Mises, Ayn Rand, Hayek, everyone. Do you sympathize more or less with their ideas now than when you were first getting into Bitcoin and ultimately creating Ethereum?

[00:29:08] Vitalik: The thing that I definitely sympathize less with is the idea that any of those things are complete philosophies. It's like how Ayn Rand has the whole A-equals-A thing, where you're supposed to start with A equals A, and then it could go all the way to, "and this is why taxation is theft."

[00:29:31] Dan: [laughs]

[00:29:32] Vitalik: Look, I definitely have a default suspicion toward any argument of the form, "Hey, things should be organized totally differently, and I have this one line of mathematical logic to prove it." There's a difference between believing in a principle as a statement of propositional logic that's supposed to always hold because it holds by definition, versus believing in a principle in the sense of, "Oh, this is neuron number 3974 in my brain's internal neural network, and I notice that that neuron being activated highly correlates to things that I like, and so I'm going to push for things that activate that neuron more, and push against things that deactivate it."

That more fuzzy approach to thinking about principles, one that treats the idea space our brains work with less like a formal logic problem and more like LLMs, is something that definitely appeals to me more. Then you can also go down into the specifics, and there's a lot about those thinkers where the ideas make sense as personal motivation and as deep social insights, even if taking them too literally would just turn the world into total hell.

There's the famous quote that Ayn Rand's heroes are completely crazy and impractical supermen, but her villains are totally realistic. And I think there are ways in which, personally-- look, even when I was a teenager reading Atlas Shrugged, I found it deeply motivational. The underlying message is that if you feel terrible, then you should not try to find ways to blame that on society or the world being unfair or systemic and structural badness or whatever. You should just blame it on yourself and focus on how you can change.

That's a lesson that I took to heart then, and I feel like Atlas Shrugged actually did contribute to my ability to take it to heart. I feel like it was the right lesson for me at the time. I think there are good nuggets of personal philosophy in there in that way, and similarly, good nuggets of social philosophy in those things. The failure really starts to come when you start taking too seriously the idea that these people are providing complete systems that you're supposed to buy into in their entirety.

[00:32:49] Dan: Yes. If there were a really bright 13- or 14-year-old who wanted to get into crypto, would you still have them familiarize themselves with the classical libertarians, or would you have them start somewhere else?

[00:33:02] Vitalik: That is a good question. I think it's interesting because, over the last 10 years-- this is one of the sad things-- I feel like we've been entering a much less principled age, in the sense that it feels like an indisputable fact that Rothbard and Mises and Hayek and Rand and all of these very deep and systematic intellectuals are just less influential. They have much less of a political camp that is deeply guided by them than they did 10 years ago.

Then you ask, what replaced them? The thing that seems to have replaced them is Andrew Tate and the Bronze Age mindset or something. I read these and I just go, "WTF, mate?" Again, they're things where I'm sure there are people in particular psychological situations, at particular times in their upbringing, for whom it makes total sense to receive certain messages. But as bedrocks of an intellectual vision, they're just, again, totally random chaos.

Yes, there's this question of: is that a trend that you'd want to fight against and go back to the older stuff, or are there structural reasons why the older stuff stopped being relevant? It's like, you can't go back to the way things were, and the only way out is through. The biggest structural things that happened-- I think there's two. One is just technology and social media. The other is this big geopolitical fact that the post-1990s, post-USSR-collapse equilibrium, where both the US and a whole bunch of its ideas were extremely prominent, proved to be a very temporary thing, and it ended up collapsing.

Neither the old technology environment nor US hyper-primacy is coming back anytime soon. Unless, of course, some totally curveball-y thing happens with AI that really does bring back US hyper-primacy or something. So far, that's not the default trajectory. You have to adjust to those realities. I think the thing that I would recommend now to someone is definitely to read things from different perspectives.

I still think that the early-2000s rationalist canon is super good, and the 2010s Slate Star Codex canon and all of those posts, especially about the nature of tribalism, and especially the idea that if you want to be tolerant, you have to tolerate things that you personally actually dislike, as opposed to tolerating just things that mainstream society is in favor of and you're in favor of anyway.

There are a lot of these fascinating, deep philosophical nuggets in there. Then on the specifically crypto-anarchist and broader crypto side, there is the original manifesto, the original white paper, a lot of those more specific canons. Phillip Rogaway's The Moral Character of Cryptographic Work is a good one. There's just this broad collection of things that I think are still really good and really valuable reads. I definitely support people reading that.

[00:37:20] Dan: Yes. I'm curious what you think, though, because, I find it really interesting that you mentioned that people moved from core libertarian philosophers over to an Andrew Tate or a BAP or something, right?

[00:37:35] Vitalik: Right.

[00:37:36] Dan: I find it interesting that even in Slate Star Codex and the rationalist community, Scott Alexander has his really famous post, the anti-reactionary FAQ. People sometimes say that's one of the best introductions to this strain of thought. It does actually seem like it's a little bit seductive: people start as libertarians and then sort of move that way. To me, the core difference seems to be that it's much more concerned with culture than liberty. I'm curious what you think is causing that shift, and why specifically libertarians tend to get really interested in it and abandon some of the earlier, whatever you might call it, gray tribe type stuff.

[00:38:19] Vitalik: Yes, it's a good question. Let me see how I would analyze this. Sometimes I analyze these things by taking my own brain and dialing up to infinity the neurons I have that find those things appealing. On the one hand, in some sense this is the literal definition of empathy, and it's supposed to be good. Then at other times, people have told me that this ends up totally misreading people and being overly charitable. I'll try it anyway.

One aspect of it is-- this is the thing that Balaji talks about all the time-- that the main axis of politics has definitely shifted from being about economics to being about cultural issues. I think it's easy to find examples of this. If you remember, back in 2010, the big issue was Obamacare. Obamacare is about healthcare economics. If you want to make an argument against it, then you're going to be saying things like, "Oh, people should be free to choose whether or not to have insurance, because people have the best understanding of their situation and their motivations."

You want to put them in a position of power over things that are core to their own health. Those are economic arguments. There's definitely this very mathematically appealing aspect to those economic arguments, and those arguments do generally end up going in a particular direction, which, if you're a particular personality type, is definitely the libertarian direction.

Then if you start analyzing culture, one of the problems is that culture is just much harder to analyze in that way, right? It's a very anti-inductive thing, in the sense that the existence of a theory about culture is often itself a thing that can kill the validity of that theory. It's harder to analyze. I think cultural issues are also inherently more zero-sum, because if you focus on the economics, then, okay, great: if we had better institutions, we could all have bigger houses and better health care and cleaner air at the same time, because you can push frontiers forward, and here are these math equations that show that if you align incentives correctly, that actually happens.

Then with culture, there are these deep vibe-level preferences that each of us have that are, again, much less examined with the same intellectual tools. The kinds of conclusions that you would come to if you just start, I don't know, thinking about culture end up being very different. As for whether this switch to a focus on culture is good or bad, I would say it has one very understandable part, but it also has some very bad parts to it.

The understandable part to me is, if you think about it just from the perspective of your personal finances: if you personally had to choose between option A, which is, let's say, the life and the job that you currently have now, and you earn, let's say, $250K a year, and option B, where you work at some soulless multinational bank, and your job is to help one company win weird zero-sum competitions against other companies, and these are just totally random companies selling boats to rich people, but you earn $2 million a year. Which one of those would you personally rather have? Yes, I think for a lot of people the answer is the first one.

Then if you ask the question, would you rather have the fulfilling job at $2,500 a year versus the unfulfilling job at $20,000 a year, it suddenly becomes very different. I think a lot of people would probably go for the $20K. Once your economic situation is good enough, it makes sense to reallocate more of your caring toward culture. It's definitely true that libertarian literature just has less to say about culture, in part because all of these economic modeling tools that it relies on just don't really fit well in that domain. That's the understandable part.

Then the place where I think this becomes pathological is, one, there's the whole luxury beliefs argument, which is that if you're a coastal rich kid, then you're at the level where that fulfillment matters more. Then if you start even subconsciously nudging your entire country's politics in directions that optimize for that, you're screwing over a whole bunch of much poorer people in your own country, and possibly even poorer people in other countries, by polluting their intellectual sphere when what they really need is to continue caring about the economics.

Then there's just the whole short-term, long-term thing, where good economics tends to work out even better in the long term than in the short term. With culture, the tools that we have to deal with it don't seem to really work in that long-term way yet. Let's see, how would I think about this? The thing about the neo-reactionaries, again, is that they're definitely an internally diverse group in some ways. Just thinking about the Moldbug posts that I've read, very few of them seem to actually focus on culture. They seem to focus on the efficiency of monarchical rule, and how the bad guys in historical conflict number 37 were actually the good guys, and all of this stuff.

Then there are definitely lots of people who actually do focus on culture. So, my analysis for why it's happening: if I had to name one factor that could be the biggest, it's probably the combination of culture being what you care about more once you're wealthier, and society having adapted to that. We know that even from people's personal preferences.

Then there's the internet and social media aspect of things, which is basically that optics has been increasing in importance with pretty much every passing decade, starting from basically the start of the Industrial Revolution. The extent to which you can actually get better or worse results in terms of accomplishing your goals just by having good optics versus bad optics in this public space was small 200 years ago, bigger 100 years ago, bigger 50 years ago. It's just crazy big now.

I think it's one of those realities that you have to analyze, and one of those realities that rationalists and libertarians definitely find somewhat discomforting. It sometimes feels like there is a parallel between this discomfort about the inevitability of optics and vibes mattering among that crowd, and the discomfort about the inevitability of incentives mattering among communists. That's probably a maximally inflammatory way to put it, but I'm arguing against my own camp, so I'm allowed to.

I guess to summarize, there are these structural reasons that people are starting to pay attention to culture more. I think that's unavoidable. I feel like our intellectual culture hasn't yet found ways to think about those issues that are healthy and lead to good outcomes. That's the thing that I really hope we can try to improve on.

[00:47:26] Dan: Got it. To what degree do you think Ethereum and crypto more broadly is a cultural innovation versus a technological one?

[00:47:35] Vitalik: I think it's huge, and I think the easiest way to see why it's huge is by comparing different cryptocurrencies and different blockchains. If you ask people why they are in Ethereum versus Solana versus Bitcoin, a lot of the time it is not the technical differences between those platforms that is the reason why. It's about the underlying values that those communities have. The fact that the Solana blockchain currently has this number of validators and requires this number of terabytes to run is almost less important than, for example, the fact that there seem to be dApp developers who are fully comfortable with releasing dApps where their contracts are closed source.

That second thing really offends people culturally. That cultural offence almost matters more than the technical properties of those platforms. So, comparing between the blockchains, it's really easy to see the really large extent to which these are cultural platforms even more than they are technologies. To me, that makes sense, even from the point of view of someone who thinks that technology is ultimately what matters — say, because you believe these things are going to go mainstream, so the culture is going to regress to the mean and the technology is what stays. Which is a very good argument.

Even still, I think up until we get to that point, the culture determines the derivative of the technology. It determines the direction the technology is going. If you have a culture that values decentralization, then you know that centralization problems are going to have people that work hard to try to fix them.

On the other hand, if the culture doesn't value that, then the problems are not going to be fixed. Another example of this is the intellectual culture of Bitcoin maximalism. You can easily feel how it's an intellectual culture that is about justifying a thing that already exists and is already fixed. If a thing is a certain way, then the engine of the brain is targeted toward creating sequences of characters — tokens, I guess, is the new way to call them — such that once you hear those sequences of tokens, you feel good about the status quo, and the status quo feels correct.

Whereas, I think, Ethereum, relatively speaking, has less of that, and the intellectual culture is more about helping, using ideas as a way of deciding, well, how are we actually going to change the ecosystem, either what goes in the next protocol hard fork or what goes into things on top like ERC-4337 or ZK standards or whatever.

If you think about statements like privacy on Ethereum sucks, that's a thing that lots of people in Ethereum freely say, and the reason why they're free to say it without that feeling like a personal attack on their identity is because their identity is not built around defending the status quo of Ethereum as it exists today. Their identity is built around the vision that Ethereum is driving towards, and the vision that Ethereum is driving towards already has five different teams that are working on some really, yes, "powerful privacy stuff." Yes, and I think that's the way in which culture has a really huge impact in all of these systems already.

[00:51:35] Dan: My takeaway from these last few questions is that it's just culture all the way down. [chuckles] Presumably, CEOs of public companies have a reasonable grasp of what it will take to move their stock price. Yes, obviously, there's macroeconomic factors that are out of their control, but at the end of the day, they increase revenues by X percent, they keep margins at Y percent, they can make some good assumptions about where the stock price is going.

Now, I totally recognize Ethereum is not an equity, but I am curious for you, have you been able to say, if these specific set of events happen, I'm confident the price will do X, or do you even find it yourself hard to understand the forces affecting the price?

[00:52:13] Vitalik: I definitely find it hard to understand the forces. Even with this recent rise that we've seen, why did it break out this time and not the last few times it briefly went over $2,000? I have no idea. Why did the ETH/BTC rate drop from 0.07 to 0.055 at this time, but not at that other time? I have no idea. Why did it go up from 0.03 to 0.07 in the first place? I can come up with a story in my head about why that's a pretty natural reaction to an overcorrection, but ultimately, I have no idea. My brain definitely does treat them as these weird demonic forces, and there's an aspect to which you just have to accept it as it is, go along for the ride, and do what you can to make the ecosystem healthy without thinking about how that affects the price too directly.

[00:53:20] Dan: Got it. Let's talk a little bit about AI. You have a really great post that came out very recently about what you call defensive accelerationism, or d/acc. I'm going to ask some questions about that post, and I'll link to it in the show notes and recommend everybody go read it. First one that came to me is on language. You famously learned Chinese pretty quickly and know a bunch of languages. Do you think the ROI of learning a new language will still be worth it even if AI reduces all of the friction of speaking with someone in a different language?

[00:53:51] Vitalik: I think the ROI of learning languages has definitely decreased quite a bit from where it was 10 or 20 years ago. It always ends up taking longer than people expect for that to actually decrease in response to some new technology, but it is definitely lower. My advice at this point is, if you're in the US, Chinese is good, Spanish is good, and probably stick to those if you're the type that likes a challenge; otherwise, focus on other things.

Five years ago, my advice would definitely have been more ambitious. My hope, and this gets into some of the ideas that I wrote at the bottom of that post, is that at the same time as technology makes it easier for AIs to do things, technology could make it easier for humans to either do things or learn to do things. I've been hoping that we're going to magically get significantly better learning environments, whether for language or whatever else.

I think language is just an easy test case because it's so easy to define what the thing you're learning is, and it's so easy, relatively speaking, to measure proficiency. It does feel like language learning gets a few percent easier every year compared with the previous year, though the slope of that is definitely quite a bit lower than the slope of how quickly pure AI is getting better.

[00:55:52] Dan: I guess one other way of phrasing this, actually, is, if literally the cost of talking to anyone in any other language goes to zero, then the effect of--

[00:55:59] Vitalik: Then you should just focus your attention on other things.

[00:56:03] Dan: Yes, it becomes learning Latin or something, right?

[00:56:06] Vitalik: Right, exactly.

[00:56:05] Dan: I guess my question was just, from the experience that you've had, is it still worth going to learn Latin or some other language just for the sake of learning? Have you gained any non-communication benefits from it?

[00:56:16] Vitalik: Yes, it's a good question. I think there definitely is value in learning one or two beyond raw communication ability. It gives you a much better grounding to understand how language works in general, and a better grounding to be able to think about a topic without English-language cultural associations immediately seeping in. There definitely are these other forms of value, but it definitely becomes much smaller than in a world where you don't just have magic instant translators everywhere.

[00:57:06] Dan: Got it. You summarize your d/acc post at the end, and one of your calls to action is that you should build, and build profitable things, but be much more selective and intentional in making sure you are building things that help you and humanity thrive. My question to you is, how do we practically incentivize this, and is that actually really different, or how is it different, from people who are calling for AI regulation? It seems to me that's sort of what they claim they're trying to achieve. What's the nuance there that makes your call to action different?

[00:57:39] Vitalik: Yes, what tools we can use to actually achieve that differentiation is definitely one of the most important questions here. One of my answers is that we just need much better forms of funding for some of these alternatives. We need to find much better answers to, for example, the old problem of how you incentivize better open-source software to get created.

That's been a problem that the open-source software space has been really struggling with for a long time, and that's something where you can definitely extend that question and also ask, well, if direction X is more risky and direction Y is less risky, and it looks like direction Y is just insanely underpowered, then what can we actually do to accelerate that direction Y more?

The tools to do that are definitely much more expansive than just government action. One example of this: if we decide that, let's say, brain-computer interfaces and eventually uploading are a better path to superintelligence than making bigger and bigger models, then the problem is that the big-model space is accelerating very quickly. There are billions of dollars going into it, and meanwhile, in this other space of brain-computer interfaces and interacting with, understanding, and dealing with the human brain, there's a much smaller amount of funding going in.

You could try to solve that funding problem, or you could try to decrease regulation in that area, or you could try to get developers and builders more excited about working in that area. That acceleration side of things is one big aspect. The interesting thing about this is that there are a lot of these spaces that really could be significantly improved with even relatively small amounts of funding. Even with low hundreds of millions of dollars, we could have much better pandemic prevention. We could have a much better brain-computer interface space. We could have much better secure operating systems.

These are areas where that level of resources is just not going into them yet, because there is so little incentive. That's something that is quite easy for people to get into and, if they have resources, go and start focusing on. The challenge I see with a lot of these regulatory approaches is that the thing the traditional effective altruist approach is least good at thinking through properly is the question of whether your impact is even going to be on the right side of zero. I think one of the reasons behind that weakness is that if you look at old-school EA — malaria nets, deworming the world, GiveDirectly, and all of those things — it's hard to imagine how those things could make the world worse.

The question is, well, do these things improve the world at a rate of one life saved per $4,000 or one life saved per $10,000? In either case, it doesn't really matter too much; you just go and throw a bunch of your money into that thing. When you can be reasonably confident that your effects are on the positive side of zero, there's a whole set of intuitions that follow from that: carefulness becomes one of the most harmful things in the universe because it creates delays, and delays are the invisible graveyard.

Then if you start going into politics, the challenge is that politics is just a very anti-inductive game in so many ways. This is one of those examples of the idea that if people are aware of a theory, that itself might make the theory less true. There are different versions of this. Sometimes, if people are aware that some people are acting according to a theory, that might also end up counteracting that theory.

For example, imagine two possible worlds, one world where SBF was an effective altruist and is what he is, and another world where SBF is equally scammy, equally narcissistic and attention seeking and all of these things, but he's not an effective altruist. Let's say instead he is like, I don't know, a Chinese nationalist or like some totally random thing in the opposite direction. In which of those worlds would things be better from an effective altruist perspective? I think the answer totally is the one where SBF had never heard of effective altruism or thinks that it's totally stupid.

Yes, and one of the things that I think offended people most deeply about SBF was his forays into politics, how he started giving money to politicians. He even started actively campaigning for his own direction on how crypto should be regulated, which was really not aligned with the direction of the rest of the ecosystem, and all of these different things. A lot of that ended up massively turning people off, and a lot of it ended up harming even causes that he likes and that we like, in all kinds of ways.

I think the OpenAI situation is similar in that respect, because in the OpenAI situation, what you basically have is just the raw optics of it: you have five people who are totally unaccountable, whom people have never heard of. People's first reaction is, wait, what the hell is this team? What the hell even is this weird corporate structure that gives power to these five people?

Then the first thing that people hear about this is that these five people have killed the CEO — figuratively, not literally — and are threatening to destroy this amazing company that hires all of these bright people and is just making this AI thing that we know and love happen. That's people's first public impression of this entire governance structure. If you do that, then, yes, people are going to hate effective altruism. People are going to hate AI safety. These are very understandable and human reactions.

These are all concerns that become real once you start wading into this domain of politics, this domain of influencing large-scale incentives and influencing behavior, instead of just being in a corner and doing things yourself. It feels like people are totally not ready for that. I think this is one of those areas where we need an approach that isn't just about we have to pause, we have to regulate, we have to slow things down, but that actually asks: what is a positive vision that a critical mass of both builders and the general public realistically can get behind, and that has a reasonable shot of actually being viable?

Let's try to figure that out and get people excited about it. That just feels like a much better strategy than focusing exclusively on the pause direction. Yes, that's how I think about the difference, from both a public-opinion perspective and my own.

[01:06:44] Dan: Got it. You've noted, I think a couple of times, that your P(doom) from AGI is around 0.1, or 10%. Is there any one specific thing that you could see happen that would shift your P(doom) from AGI to less than 1%?

[01:06:59] Vitalik: Good question. Some very effective, very convincing rebuttal of the theoretical arguments for why AGI is uniquely dangerous would definitely do it. I can't tell you what form that rebuttal would take, because if I could even describe its form, I'd be 90% of the way to actually having the rebuttal itself. What other things? Obviously, us getting to better-than-human-level AGI and things continuing to coast reasonably normally.

I think a lot of the arguments for doom hinge on this discontinuity that happens once the AI gets above human level versus being below human level. I think there are two discontinuities. One discontinuity is this whole recursive self-improvement thing: the fact that the AI actually would be able to outsmart people and hide from people while rapidly copying itself, and all of these things.

Then the other discontinuity is that there's a class of possible worlds where it's very easy to get up to roughly human level, but you just get stuck at that point, and it suddenly takes much longer to actually get above it. The way that world would look is basically a world where it turns out to be very possible to replicate, in some abstract sense, patterns of behavior that already exist and that people have already been doing, and to generalize and automate those.

But there's some planning capability that humans have, especially planning in unexpected situations, that for whatever very fundamental reason is just a much harder thing to achieve, and we end up needing a lot more effort than expected to actually reach it. I feel like there are definitely sprouts of evidence of something like that being true. At least to me, AI progress in 2023 has felt slower than it was in 2022, just in terms of how I feel about how much has changed since.

It feels like 2022 was a big zero-to-one year. The biggest thing about 2023 is probably that it's been a catch-up year for the open-source ecosystem, which I think is great. As I said in the post, the biggest non-doom risk that I'm concerned about with AI is the centralization risk.

The nice thing about the open-source AI space, and especially all of these models that you can go and run on your laptop — which I love and have totally played with — is that if an AI breaks the world, it's going to be one of the big ones made by a big corporation or a military. It's not going to be a random guy on his or her laptop.

Then on the other side, if we want to reduce the extent to which this hyper-centralizes everything, you need something that's an answer to the power that these big models provide, and open-source — or open-weights, I guess, is the better way to put it — AI models running locally are the way to do it. What we've seen is that it has been a catch-up year for the open-weights model ecosystem. We haven't seen comparable leaps of wow, amazing progress in the same way that we did in 2022.

It feels like more and more people are starting to say that LLMs are good at replicating patterns of things that have been done many times, and much, much less good at extrapolating and creating and thinking through fundamentally new categories of things. It could easily be that this is just the limitation of LLMs, and there are one or two more technological breakthroughs that we need before we get to superhuman AI.

Then the question is, in that world, how long do those breakthroughs take? It could be five years, or it could be 50 years. The longer that timeline is, the lower my P(doom) is. If, three years from now, it feels like we're in an LLM plateau and there's nothing even more dazzling around the corner, then my P(doom) would probably drop. It would not drop to under 1%, but it would definitely drop by something like two or three percentage points or so.

[01:12:17] Dan: One part of the post is that you seem fairly optimistic about brain-computer interfaces, at least if they're possible and useful, as a path to saving ourselves and making sure AGI goes well. I'm curious about your views. People like Robin Hanson expect human descendants to be super weird and very different from us; he's got that book, The Age of Em, that explicitly outlines a vision for this. How important do you personally think it is that we conserve at least some of the things that we value today well into the future, with our descendants?

[01:12:53] Vitalik: I definitely think it's important. I get the desire to say, "Oh, we should be open-minded, and if this is the next stage of human evolution, then we should embrace it, because if we don't and we stick to present-day values, are we not committing the same sin that the old curmudgeons who dislike homosexuality are committing?" for example. [chuckles] I get that desire.

There's a difference between things that happen within the distribution of human behavior and things that are way outside of the distribution. If you think about The Age of Em world as he describes it, then one of the risks is that competition is basically going to compete away consciousness. At some point, Ems are going to stop being conscious. To me, that would be terrible, and that would be an example of something that I would not want to see in our world in the future.

Even still, the thing with Robin Hanson's world, to me, is that it's a world where things could easily be much worse. If you think about it, it's not a world where one superintelligent AI kills everyone. It's also not a world where we have a hyper-centralized world government. It's not a world where humans are pets. It's a world where humans have a path to continuing to be meaningful, frontier actors in the future of our galaxy.

It's a world that really does manage to evade lots of dystopias. At the same time, it definitely has its risks, which are basically, as Robin describes it, the whole set of problems of the Malthusian world coming right back with a vengeance. It's not perfect, but if that's the default path, there are definitely a lot of people who would breathe a sigh of relief.

[01:15:13] Dan: Have you been tempted to focus more of your time on AI just given some of the effort you put into that post and thinking about it a lot lately?

[01:15:20] Vitalik: I definitely feel the need to just make sure I understand the space, and part of understanding the space properly is becoming an actual user. One example of this: I got this insight recently that current AI drawing tools are excellent at making an image if your goal is to make something that dazzles people. If your goal is to make something specific that you want for a particular purpose, then they're just terrible.

You have to fight them and do 20 rounds of inpainting and all kinds of stuff; you fumble around with, let's call it, AI you don't understand. It gets much harder. I noticed the same thing with the GPTs as well: the time when the GPTs got good enough to seriously impress me was well before the time when they got good enough that I felt comfortable using them and trusting their output.

That's an insight that I only could have gotten by being an actual user. I definitely have been playing around with these things. I have this Python script open where I try to actually run some of these diffusion models locally and see if I can use them to draw things and basically see to what extent I can actually do things without shipping all of the data about what I'm doing to large corporations and all of that.

I'm doing my part to stay up to date with that space, with the AI space itself, and with people's concerns about safety and alignment, and with what the people who are working on those issues are thinking about and worrying about. I definitely think it's important, though at the same time I'm definitely not doing the stereotypical thing of quitting crypto for AI or whatever. I just don't think that makes sense for me as a person to do.

[01:17:34] Dan: Last question here. If Ethereum succeeds in your version of its mission and the vision that you have for over the next 10 years, what would be the single most likely causal factor?

[01:17:44] Vitalik: Single most likely causal factor in Ethereum succeeding? I just have to say continuation of trends. Basically the same underlying reasons that made adoption and interest keep increasing over the last 10 years just end up continuing for another 10.

[01:18:07] Dan: Would you say it's more about risk mitigation than it is about like any specific change?

[01:18:14] Vitalik: One aspect is risk mitigation. Another aspect is making sure that usability and scalability are actually there in time for when a much larger group of people wants to actually use the chain, because if the technology isn't there, then the next bull market is going to be a disaster for Ethereum. Transaction fees are going to go up to $500, and people are going to go back to hating crypto again. Scalability and usability do need to be solved. The good news is that we are much further along at solving them now than we were a year or two ago.

[01:18:49] Dan: That's a great question to end on, Vitalik. Thank you so much for your time.

[01:18:51] Vitalik: Thank you too, Dan.
