Zvi Mowshowitz

AI and strategy games


(0:00:00) Intro

(0:00:43) What makes a good strategy game?

(0:05:29) Culture of Magic: The Gathering

(0:10:14) Raising the status of games

(0:13:31) First mover advantage in LLMs

(0:18:23) Consumer vs. enterprise AI

(0:21:28) Non-technical founders

(0:25:24) Where Zvi gets the most utility from AI

(0:28:56) Straussian views on AI risk

(0:36:18) Is AI communist or libertarian?

(0:44:50) Dangers of open source models

(0:47:18) How much GDP growth can we realize from today's models?

(0:49:40) AGI and interest rates

(0:58:22) RLHF impact on model reasoning

(1:00:42) Bayesian vs. founder reasoning

(1:04:15) Zvi Mowshowitz production function

(1:10:16) Is AI alignment a value problem?



[00:00:20] Dan: All right, today I have the pleasure of talking with Zvi Mowshowitz. He writes a Substack called Don't Worry About the Vase, where he shares what is probably the Internet's most comprehensive update on everything that's happening in AI each week, along with a variety of other interesting topics, including rationality, policy, game design, and a lot more. Zvi was also one of the most successful professional players of Magic: The Gathering and went on to become a trader and startup founder. Zvi, welcome.

[00:00:45] Zvi Mowshowitz: Thank you. Good to be here.

[00:00:46] Dan: All right. First question: you played Magic: The Gathering professionally for several years and were inducted into the Hall of Fame. There's this interesting post I read the other day by Reid Hoffman, where he talks about what makes a good strategy game for application to life. His core idea was that games like chess or Go, which require mental focus and dedication, don't actually teach you to be strategic in ways that match how the world works. There are no outside variables, like luck, weather, or external market forces. You're just memorizing the best move in any given situation. My question for you is, what are the key characteristics, in your mind, of a good strategy game?

[00:01:24] Zvi: There are several different questions here. There's the question of what is a good strategy game, and there's the question of what is a good strategy game for the specific purpose of developing particular life skills, or life skills in general, which is more what I think Reid was talking about in that statement. I think he's selling chess, and especially Go, short, in the sense that if your plan for Go is to memorize all of the best moves on a 19 by 19 board, where each move involves picking one square and where on many moves there is no obvious causal link forcing you to do anything, then you're going to have a really bad time.

I think this is the reason why people thought that AI was going to have a really hard time with Go, and why it's significantly harder for AI than chess, or was traditionally viewed that way. You don't get good at Go purely through memorization any more than you get good at life purely through memorization. You get good at Go by understanding the principles and figuring out how to think about Go and how to relate to previous positions. Even games like chess and Go, even though they technically have no luck, effectively do have substantial amounts of luck, because exactly what position you're facing, how your opponent chooses to move, and what happens in the game beyond the amount of [unintelligible 00:02:39] that you're able to look forward into the game are things that you have very little control over.

Effectively, if two players have similar skill levels, sometimes one will win, sometimes the other will win, and sometimes it will be a draw, even if they are both comparably on their game, so to speak. They won't all just be draws because the players are equally matched. You certainly learn certain forms of strategic thinking from a game like that. It's just very narrow compared to what a game like Magic can teach you. Magic definitely has these other aspects, where the game's components and background and rules are constantly changing, and you have to take into account completely unexpected dynamics where you have unknown unknowns, and the game can throw anything at you at any time, and you just have to continuously adapt.

I definitely think that helps you a lot, and it makes for a better game over time in other ways as well, as does the luck component being more present. The biggest problem with chess and Go is that if I play you in a game of chess, chances are very high, even though I have never talked to you before, that either you will crush me every game, or I will crush you every game. I don't know which one, because I don't know if you're any good at chess, but the chance that you are within about plus or minus 200 or 300 points of me is not that high. On the standard Elo scale, 90-plus percent of chess players will not be able to give me a good game, because either I will crush them or they will crush me.
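As a rough aside on the numbers he's gesturing at, the standard Elo expected-score formula makes the point concrete. This is a minimal sketch (the function name and sample ratings are illustrative, not from the conversation):

```python
def elo_expected(rating_a: float, rating_b: float) -> float:
    """Expected score of player A against player B under the standard Elo formula."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))

# A player rated 300 points higher is expected to score roughly 85%,
# which is why most pairings feel like a crush one way or the other.
print(round(elo_expected(1800, 1500), 2))  # prints 0.85
```

Evenly matched players come out at 0.5 each; at a 300-point gap, the favorite is expected to score about 85% of the points, which lines up with the "either I crush them or they crush me" experience.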

That's true for basically every chess player. There is no narrow range where most chess players sit; that's just not how the distribution works. Go has a better handicap system, so you can use handicap stones to create a reasonable game much better than you can handicap chess, but if you don't want to use that, then you have an even worse version of the same problem, is my understanding. It would certainly be pretty bad. Whereas with Magic, I can play almost anybody, and they have a chance. The game can still be interesting in some sense. Taking my win percentage from 90 to 95, or from 1 to 10, is an interesting challenge, even if I am either very undermatched or very overmatched in that situation.

There are always new things to be figuring out, new aspects to consider. Magic players have gone on, in my experience, to do very good jobs at a variety of other games and other challenges that have nothing to do with gaming, but it's always hard to differentiate. I think it develops a lot of great skills, but also I think it attracts a lot of great minds who are very skilled, very talented, very motivated, and they tend to be underappreciated by the outside world. It's very hard to differentiate how much of that is Magic makes you awesome, how much of that is you already were awesome and that's why you played Magic, and how much of that is that the other players in Magic were also awesome and you got to hang out with them, similar to the way you network in college.

[00:05:30] Dan: While prepping for this, I was asking ChatGPT, "Why isn't Magic more popular?" One of the reasons it gave is cultural perception. In its words, basically: poker is associated with gambling and James Bond and suaveness, chess is heavily associated with raw intellect, but Magic is a collectible card game with fantasy elements, and the audience likely to be interested in that is not wide. In your view, would the game benefit from experimenting with branding and remarketing while keeping the same general outline and rules, or is the cultural context actually part of what makes it special?

[00:06:09] Zvi: First of all, there are always many reasons why something isn't more popular than it is. There are reasons chess isn't more popular than it is, there are reasons why hamburgers aren't more popular than they are, and so on. There are reasons why motherhood and apple pie aren't more popular than they are, even though they are very popular. Magic has actually, until at least very recently, still been growing steadily in popularity, along with the number of players playing it; I haven't been following the numbers in the last year or so. Magic fell off of people's cultural awareness radar screen, but became much more of just something that everybody does in the background, more often than you would think.

We're recently seeing a new renaissance of game stores in New York City, where I live, that partly reflects this. The reason why Magic cards have gotten so expensive is that there's a fixed pool of older cards that are now desired by a much bigger pool of players. The difference is that we've shifted from competitive gaming, which is often easier to notice in some sense, to the primary way of playing being Commander, a four-player, more casual mode. That mode is more often played around a kitchen table; it's played more casually, and therefore it just isn't noticed. It isn't a cultural phenomenon in the same way, but it's still happening.

I think that when you look at the Magic cards and the Magic setting, a lot of what you do in Magic is built around the intuitions we have surrounding, "How would this concept work in this type of world?" The reason why we love fantasy in general is because fantasy settings give us natural metaphors, natural ways of expressing the things that feel natural to us, so the things that we want to do can always be expressed. Why are anime settings constantly injecting weird magic into them when the story you're trying to tell doesn't necessarily have anything to do with anything magical? Why does magic just keep showing up in people's stories and novels and such when it doesn't have to be there?

It's because it's a crutch, in some sense. It's a storytelling tool that lets you invoke things without having to exactly explain the technical mechanisms, but they feel right. Magic is, in fact, a very, very convenient setting for "we just want to be able to do whatever we want to do and have good metaphors for it." If you want to learn something, having good metaphors, having an intuitive understanding of what it means, is much, much better. I am physically unable to learn foreign languages in a reasonable way, because my brain is not set up to memorize arbitrary facts, and "this set of sounds corresponds to this noun or this verb" is, to my brain, an arbitrary set of facts.

In Magic, I am able to memorize thousands of cards a year, which are effectively new words, and barely even blink, because they make intuitive sense. They all relate to each other. There are pictures that are illustrative. The concepts have names that evoke what they do, and it all ties together. My brain is able to synthesize that. I don't think you'd want to shift it away, and some people go, "Oh, this is silly. It's got pictures of elves and dwarves and flying unicorns and whatever else they've got." Sure, of course, but people also love that stuff. They eat it up. I don't think that shifting it would be a particularly good idea. I think that you have to go to war with the army you have.

You have to use what tools are available. I think they've done a very good job of that. Magic's biggest challenge is basically sustainability. As we print more and more cards, and we've used more and more of the low-hanging fruit in terms of the names of cards, the concepts of cards, and the mechanics, and a lot of players have seen more and more of that over time: "Can we still do something that is fresh and new while not being too complex, while still being accessible, while still being strategically interesting and balanced?" And that challenge just continuously gets harder every year.

[00:10:16] Dan: Yes. Maybe another way of phrasing this question is, at the margin, would we benefit from just raising the status of strategy games and esports in general? I think it's actually become popular recently for people to say that a great person to fund as a startup founder is someone who was world-class in some esport, because it takes some amount of intellect, but it also takes a lot of dedication and willingness to just figure things out to get to the top of your game. Do you think that we should try to raise the status even higher, and that this should be something we encourage?

[00:10:49] Zvi: I find "even higher" to be a funny way of describing the status.

[00:10:54] Dan: Maybe tell me where you think it is now and where you think it should be.

[00:10:57] Zvi: I think it's in a better spot than it used to be, but the answer to your question is yes, just yes. Very emphatically, raising the status of intellectual competition, raising the status of just trying to be the best at something, of trying to accomplish something, of working hard at it, of just being the best or trying to be the best, even if you don't succeed at being the best, but having that mindset, learning those skills, being part of that culture, working on that training: absolutely, we should massively raise that. I think it's much better training than what we typically have young people do to develop their skills and capabilities.

Also, I would say it's not just esports. I would also raise the status of sports in this way. If a successful professional ball player comes to you and says, "I want to found a startup," they're not stupid. They work really hard. They know what competition feels like. They know what it means to beat the odds. They know what it means to stay up every night working harder than everybody else and to not take anything for granted, et cetera, et cetera. Yes, you tell me any professional NFL quarterback, I will maybe have a brain scan done to make sure they haven't taken too many head injuries, but subject to that, yes, I'd say, "Shut up and take my money and go do your thing. I don't even care what it is." It's not just sports, either.

I would extend that to basically anything. Just be really good at something. Having been really good at something hard, something that requires struggle, something that requires lots and lots of training and effort, even if the prizes were not so good, means that you were intrinsically motivated to do it. That tells me you're probably going to be really good at other things. You see this in so many other areas. In show business, we see that people who are good at one aspect of the business usually end up being really good, if they put their minds to it, at other aspects of the business. Why are these actors good at directing? Why is there an overlap here?

Partly it's because they pick up a lot of knowledge from acting, but partly it's just that if you succeed at one trade that's really competitive and hard, it means you've got what it takes to learn something else. Therefore, you sing a bunch of songs, and even if you didn't start out that way, you often end up writing them. Many other seemingly much less related things work the same way. I would say, yes, show me a speed runner who's really good at speed running, and I'll show you a good founder.

[00:13:31] Dan: How important do you think first mover advantage is going to be for LLMs? A common analogy here is Google Search. On the consumer side, especially today, there's just no reason why someone can't go create a replica of Google, and it could probably even be a lot cleaner, with fewer ads, and give you maybe more favorable results. We could debate the nuance there, but it doesn't seem too challenging. Google is just the de facto default. They have partnerships with Apple so that they're going to be the go-to on Safari whenever you open the app. For 95% of people, that's just going to be okay. How do you see this playing out with LLMs over the next few years?

[00:14:09] Zvi: I would dispute that it's easy to do as well as Google Search at search. Despite all the difficulties, considerable effort has been put into a number of competitors. Those competitors keep not taking that much market share. I know a lot of people who are constantly complaining about how bad Google Search is for them and how much it's not what they want, who would absolutely try any number of additional search engines, and who, in fact, are using various LLMs effectively as new search engines because they're unsatisfied with Google Search. If somebody came to me and offered me a superior product, I think I would quickly take it, if it was actually better for my purposes.

I tried Perplexity, and it was a pretty good hybrid of a search engine and an LLM. I used it for some things, but then over time I learned that for many purposes, just Google Search, being instantaneous, being natively tied to the web like it is, giving me the links in an easy form, is the product that I actually want to a large extent. Sometimes it's not, but often it really is. On that front, no one's done better. For LLMs, I think there's very little lock-in. At first I was using GPT-3.5, and then I got access to GPT-4. I was using Bing. Then I was also trying Perplexity. Then I was experimenting with other stuff. Now, I use a lot of Claude.

Periodically, I try Bard and see if there's anything useful to do with Bard. Because of the way that LLM architectures work, the things that you learn working with one LLM carry over to another; right now, we're just talking about the consumer experience in the near term, not the long-term effects. If you build a better product, I don't think there's much lock-in at all. I think it's very, very easy for the consumer to move. For the business product, I think there might be a little bit more of, "We've trained on the exact quirky details of how to get good outputs for our purposes out of this LLM. We don't necessarily want to move to this other LLM."

That's going to happen anyway when GPT-4 becomes GPT-4.5 becomes GPT-5. You're going to have to upgrade. Your LLM is never going to be good enough in its current form for very long. They're continuously training them in ways that disrupt the current instructions. People complain about this in ChatGPT. They say, "It got worse." No, it got worse at giving you exactly what you wanted from exactly the thing that you trained to do exactly what you wanted. It's the same way that, "Five years ago I ordered these 10 dishes from these 5 restaurants. When I try to order exactly these 10 dishes from these 5 restaurants again, the experience got worse." Of course it did, because one of them closed, and one of them changed its menu.

The things that improved, you're not noticing. The quality of life in general went up, but your experience of trying to replicate exactly what you had went down. I talked about similar things in the Immoral Mazes sequence, in weird ways, where any given thing is almost always getting worse in the exact way that you were previously using it. It doesn't mean the world is getting worse; the world is getting better. As long as we allow creative disruption, that's going to be true. What's going to happen is that the LLMs are going to continuously improve. Every time they improve, at different times, different people are going to be ahead in the game, unless OpenAI keeps knocking it out of the park and never has a rival, but that's not what we expect by default.

We expect Google, at least, to give them a run for their money at various points. The big question to me is, does the first mover advantage lock in enough users early on that it gives you more data about what users do, about what users want, about what is a good and bad response? You can then use that data advantage to create the best next-generation product and keep that advantage in a practical sense. That's something we're going to learn over time. My guess is that you can just pay for feedback and user data that gives you exactly what you want. The amounts of money these companies are willing to throw around are off-the-charts huge. It's not that hard to gather a lot of meaningful feedback data, and you can cheat and use feedback data from other LLMs on your own LLM. I think in the near term, no, it's going to be pretty competitive.

[00:18:24] Dan: What do you think is more important, though, for staying in first place: having the consumer market or the enterprise market?

[00:18:31] Zvi: My guess is that, long-term, the business market will probably be the bulk of queries, simply because we're going to start having lots and lots of businesses that use it constantly internally; their employees are using it in their day-to-day lives, but because of the need to protect their data, they're using an internal, specialized version rather than the general version. Everywhere you would currently talk to a customer service rep, you'll be talking to an AI, and in all the places where you currently aren't using intelligence, businesses are incorporating AI into their products.

All of this might potentially overwhelm the consumer uses of chatbots, and it also has a lot more lock-in. If I were looking to build a moat, looking to build a long-term future, I'd be worried more about corporate relationships than I would be about the consumer-facing business, except insofar as the consumer-facing business causes you to form business relationships.

[00:19:22] Dan: Yes, it's interesting. I might've gone the other way. I don't have a super strong view here, but that's the Google analogy I was getting at in the beginning: once you have a consumer product that's just a winner, unless someone really blows it out of the market with something that is clearly better, consumers tend to have not that much motivation to switch their workflow, at least the median or typical consumer who isn't thinking about these things all day. Whereas with the enterprise, if you figure out a way better product, yes, there's some degree of lock-in, but it's a little bit easier to just walk up to them and say, "Our product is better. It saves you money. It's going to improve these metrics in your business. You should switch," and get them to do it.

[00:20:02] Zvi: The business has to do a lot of work. The business has to actually train a new, potentially fine-tuned model. It has to learn how to transpose all of the queries. It has to do all of its safety work over again. It has to do all of its "make sure this gives the exact answer we want to give to all of these queries" work over again. I think that the difficulty of switching here is going to be, in many ways, non-trivial. The consumer product, on the other hand, is a very sloppy product that can afford a lot of slack and error, where switching over is literally just going to a different website, with or without paying a subscription. Right now, only ChatGPT really requires a subscription to get the consumer experience that you want, and GPT-3.5 is already pretty good and free even then. You can get GPT-4 from Bing if you really want to, again, for free.

The question is, "How much is this consumer habit going to be a force more powerful than actual switching costs?" at that point. Right now, my guess is it's not very high, because the people who are using the product are early adopters. The future is very unevenly distributed. Most people still don't really understand what ChatGPT is. You'll see the occasional sign on College GameDay that talks about ChatGPT, or the occasional joke that insults someone for being like ChatGPT. People have the "I understood that reference" level of knowledge, but most people still haven't actually tried it.

[00:21:30] Dan: To me, at least, the most impressive use that I've gotten out of ChatGPT has by far been just coding. I'm a total noob at coding. I don't really know what I'm doing. Asking just basic questions like, "How would I create a Flask app in Python? Can you teach me about databases?" It'll just give you a step-by-step: "Here are the 10 things that you would do. This would take a typical programmer two weeks." I'm like, "Let's start with step one, break this down into 50 more steps, and then let's chunk it, and just show me the code that I need to input." Then if you copy and paste an error, it tells you exactly why it thinks you're getting the error.

It understands what IDE you're using and how to change the settings. It's really miraculous. Do you expect there to be, on some length of time, a massive influx of software founders who realize, "Oh, I can actually just get an MVP out the door and learn to code in about a tenth or a fifth of the time that it would have taken me two years ago"? You have people who are maybe more traditionally McKinsey consultants or work at an investment bank thinking, "The barrier to entry is now much smaller. The technical co-founder or really technical person that I used to need, I still need them to join my team at some point, but I can get started on my own." Do you think this will cause a change, with people who aren't natively in tech trying to build software?

[00:22:52] Zvi: Yes, I think that absolutely will happen. I went from, "There is no way it would make any sense for me to ever try and code almost anything that isn't deeply, deeply simple," to, "If I weren't so goddamn busy every minute of the day, look at all these cool tools that I think I can just build." I get tempted, even so, to be like, "You know what, what if I just did a hackathon for a weekend, even though I have no idea what I'm doing?" I have the architecture skill. I am very good at figuring out how computer programs are supposed to work, how they would go about doing a thing.

I'm just really bad at writing the actual code. I have at least a 10x, probably, improvement in my ability to code meaningful things within a given period of time without actually getting a coder to do it. It's plausible it's 100x, but it just went from basically "you can't" to "no, really, you just can." It's just that I have so many other things that I want to do constantly that are fighting for my attention.

[00:23:56] Dan: Yes. I spent a couple of weekends on it a little while ago, around when GPT-4 became publicly available if you were paying for it. Yes, I had the same experience. I was like, "I couldn't code before." I wouldn't say I'm a good coder now, but it gives you the ability to actually build something in a short period of time, which is pretty astounding.

[00:24:14] Zvi: Yes, if I had had my current level of coding ability with GPT-4 five or ten years ago, at the time that I was trying to code things, I would have been a very, very good coder very quickly compared to everyone else who didn't have it. I think this leapfrogs you pretty fast.

[00:24:30] Dan: Yes, and even if it's not actually directly writing the code, it explains things to you if you're getting an error or something, which in a previous life could take hours for you to get unstuck.

[00:24:39] Zvi: No, the biggest thing is just that it will actually answer your questions for real. You can ask clarifying questions, and it will tell you, and you can sort things out. Especially back then, when I was trying to do things that were very much data processing, building algorithms that did things I wanted to do, not requiring anything that had happened in the last two years, essentially, it would have been very, very good at helping me through almost all of that. Except for the part where I was scraping websites, it would have been just amazing, and the scraping would have just been a question of, "How has this site changed since the data cutoff?" If it hasn't, we're in great shape. If it has, we'll see how well it reads HTML.

[00:25:25] Dan: Where do you get the most mundane utility today from AI?

[00:25:29] Zvi: I think coding is where I would get it if I were integrating over all of the potential things that someone like me might be doing, or something like that. In practice, where I get it is just asking questions: there's something I don't understand, it isn't about recent events, and it doesn't involve a web search. If it's just about facts about the universe, understanding things, trying to draw parallels, just checking intuitions, getting explanations, really basic stuff, I think that's where I get the most utility out of the system right now. It's really, really good at that.

Now, Dall-E 3 represents a quantum leap in the usefulness of image generation in a way that I don't think people have fully appreciated yet. We went from, "I can generate images, but I can't get the thing that I want, only some vague evocation of the thing I want," to, "No, I can get the thing I want," actually pretty close, with amazing quality. That's a pretty big change. I am looking very much forward to the world in which I get good at that over time, and the world in which we get the version of Dall-E 3 that isn't safety-bound, that can do the things it will currently refuse to do, like give me a picture of a public figure, for example.

[00:26:58] Dan: Yes. You seem pretty optimistic about everything short of the existential FOOM scenario, just mundane utility. A lot of people have the same worries that they had about Facebook, or just the internet in general, where they're worried about misinformation or bad actors using it in ways that harm politics or something. Am I right that you're an optimist on most of those concerns?

[00:27:23] Zvi: Yes. Tyler Cowen and I have a lot of disagreements about the long-term impact, but we agree on most of the short-term impacts. His analogy is the printing press: this is a lot like the printing press, in that it greatly enhances our ability to share and process information and to create change. There are going to be people who misuse the printing press. People also talk about, "Look, I typed 'murder is good, actually' into a word processor. Oh, no. I called on the phone, and I said a bad word. Why isn't somebody doing something about this?" The answer is, obviously, that's stupid. We understand this. We can deal with it. Yes, it makes these things easier, but that's for humans to solve.

In the short term, this is in fact just another tool, and it's slightly less of "just another tool" in some senses, but it's mostly just another tool. I'm confident that we can sort this stuff out and that we will emerge stronger. I'm also very optimistic that it can help actively with dealing with these problems. If you are getting a bunch of misinformation all around the web, and people are telling you crazy stories and giving you crazy theories, you can type that stuff into an LLM and ask it whether or not it's made up, whether it's making any sense. That's pretty good.

A lot of times there are social reasons why you can't just ask other people, or you'll get bad answers if you do, because you're asking them about the very thing people were fooled about. I'm pretty optimistic that this is going to be not only handleable, but in many ways an improvement.

[00:28:57] Dan: Yes. Speaking of Tyler Cowen: you've written before about how he and Marc Andreessen, who are probably two of the most, I don't know what the right word is, strongly intellectual people in this space, are really not worried about AI existential risk, to the point of essentially saying, "We shouldn't even be thinking about it at all. We should just turn the dial all the way to the right and go maximum fast." You have this post called The Dial of Progress, where you argue that they've realized the discourse can't deal with nuance, so you have two options: you either say, "We're going to go really hard on progress," or, "We're not going to go hard on progress at all."

You make the case that maybe we should introduce a little bit of nuance, introduce some more dials. I have some speculation, just personally, about their public stance, and I'd be curious to get your take on it. Tyler loves being Straussian, or maybe if he doesn't love being it, he loves to look for the Straussian view in other people's statements. Think about it: if there were zero of what you would call serious thinkers going all-in on AI acceleration, and it was just anon accounts on Twitter or something... When you have someone serious like Marc Andreessen or Tyler Cowen, who are usually pretty rigorous in their thinking, saying that we should just turn it all the way up, maybe the worry is the counterfactual, in their minds, where they don't exist.

Every single smart person who people give credibility to says, "We need to regulate really quickly." Then the equilibrium ends up being, "Oh, we just over-regulated it, and we're in a nuclear-power-style scenario where we're not realizing the full potential of the technology." By being the two prominent people out there saying, "Let's just turn the dial all the way up to the right," they cause people like yourself to write responses to them that are really clear and thought out and introduce more nuance. Maybe they're not saying exactly what they think. I'm curious for your view on this.

[00:30:51] Zvi: There are two very different cases here. For Tyler Cowen, I'm pretty confident in what he actually thinks, based on interactions in person and in Zoom calls and emails, and also on him being way too smart to think that he is helping if he holds the other view. I think you're being a little bit unfair in terms of him saying, "We should turn it all the way to the right and just accelerate maximally." I think he has more nuance than that. He understands that, from his perspective, he has to say, "Yay, progress." He has to say, "Yay, AI development," because he sees the alternative as turning out worse in practice.

He also is trying to push the critics to be better from his perspective, to ask various questions and to take into account various things that he thinks are important, and so on. Yes, also, he's a troll, and he's a Straussian, and he actively likes making his readers angry--his words, from his conversation [unintelligible 00:31:52]. I'm not putting words in his mouth because it's him. I think he enjoys it. I think he thinks it's good. He thinks it makes people better thinkers. He thinks people actually figure things out better when they get enraged by people saying things like this. There's a method to the madness.

I just think that what's going on also involves a failure to think well about what happens when we get to these future scenarios with very capable artificial intelligences. He just doesn't appreciate what is about to hit him. His thinking on near-term AI is often very, very good--very grounded, very specific. Then he, and many, many other economists in particular, seem to have this thing where they extrapolate to the future and go into this reference-class mode where they're like, "It's a technological advance that increases human capabilities, and it'll be like all the others, and blah, blah, blah, and everything is just going to be normal."

They just don't engage with the actual arguments for why that's not the case in a real way. They dismiss them without really counterarguing, other than with reference-class-style arguments, in my mind. Marc Andreessen is a very different case, where I don't think I would call him a careful thinker at all. I would say he's someone who's willing to express very strong opinions, strongly held. He is definitely someone who jumps on bandwagons without entirely thinking them through, shall we say.

You can look at his embrace of crypto and Web3, both in terms of his investments and his talking his book, to see that he will embrace theses that he has not actually very carefully analyzed. He's going to be very pro-progress, very pro-acceleration in general. Also, he talks his book a lot. He is first and foremost a venture capitalist and a businessman. Also, he clearly wants to be a massive, massive troll. If you've seen his Twitter, he does that.

[00:34:00] Dan: [laughs] Yes.

[00:34:03] Zvi: Also, he has a message. He's hammering it through, and he just doesn't want to hear it when there are alternative messages. If you argue with him and point out where he's wrong, he will often block you. I'm fortunate that that hasn't happened to me, but I've tried to be very careful and nuanced. "Marc, you have my email. One of your partners tried to set us up before this whole thing became a thing. I'm happy to talk and not looking for your money." It should be friendly. Hey, who knows? I really, wish we had a government and a culture that appreciated the thing that people like that are trying to say more generally about everything else.

I would, in fact, make the trade of: we accelerate everything. We build our houses. We develop our medicines, we innovate in education. We just make life super better for everyone. Also, we play fast and loose with artificial intelligence relative to what I'd like. I would happily take that trade. There are a bunch of accelerationists who are like, "I'm going to take that trade," but you can't make that trade. I'm like, "I can't make that trade. It's not in my power. I'm sorry. It's too bad." I call them the unworried, just to be very neutral and not imply anything. They call themselves accelerationists, usually, which is a word that, if we had made it up for them, would seem like a really nasty thing to do to them, but they're owning it, so it's fine.

[00:35:40] Dan: Yes. It's funny. The original term comes from Nick Land, and it's this dark philosophical idea. Just using it to say, "Let's do LLMs a little bit faster," has always been a little bit funny. [laughs]

[00:35:54] Zvi: It goes way back before that. Accelerationism represents the idea originally, as I understand it, of we should make events progress faster even in directions we don't like because we know that the ultimate outcome is favorable to us. In particular, the old joke where there are two communists on the street, and they see a beggar and one of them moves to give them a slice of bread and the other backs his hand away and says, "Don't delay the revolution."

[00:36:20] Dan: Peter Thiel had a quote a few years ago that I have opinions on, but I want to get yours. He said that crypto is libertarian and AI is communist. The idea here is, when we think about something like the CCP, they're going to get their hands on AI, and they're going to use it for facial recognition, and it's going to help the totalitarian state, and they're going to be able to monitor everything you do. Crypto is this big libertarian force where we can take the money away from the government, and the Fed can't control it anymore. It's going to give all the power to the individual. Seems like it's going in the opposite direction. I don't have too strong an opinion on crypto.

[00:36:58] Zvi: The obvious question I would ask Peter Thiel if he said that in a private conversation, just to probe the intuition is, "Do you think we should avoid building artificial intelligence?"

[00:37:11] Dan: Fair enough.

[00:37:12] Zvi: Because he talks all day [inaudible 00:37:14] about how worries about AI are overblown and how we should not be worried about it. He is another unworried person who has many criticisms--I think many of which are deeply, deeply unfair--of the people advocating that we notice that we might all die, and less in the way of object-level explanations for why he thinks we won't. He clearly is not a worried person in this context. There is nothing that Peter Thiel hates more than the Chinese Communist Party, [unintelligible 00:37:46] communism. This is enemy number one. I don't think it's an unreasonable enemy number one to have.

If you think that AI is communist, you might not want us to be driving the development of artificial intelligence, sir. You might want to speak out about it a lot more than you are. You might want to be funding efforts to not have that happen. I don't see much in the way of that from him. I find it hard to believe he has internalized this thing the way that this is implying. How libertarian is crypto? I think it's a very good question. I think crypto has proven not to be what people thought or expected. It turns out that the useful cryptos are pretty much just becoming centralized under the executive control of governments and everything else and just another means of storing value.

The original promise of all this stuff has proven to be at least highly questionable. That's not what people want. It's become a tiny portion of the value. I do understand the ethos of the people building it is there. They built certain tools that can be used for those purposes, and there's something to it. For AI, certainly, there are people who claim AI is libertarian, and AI wants to be open source, and AI will set us all free. These people are going to get us killed if they enact their agenda. This would be very, very bad.

[00:39:16] Dan: What's the rationale there?

[00:39:18] Zvi: How long an answer do you want?

[00:39:20] Dan: You tell me.

[00:39:22] Zvi: The short version is: AI alignment is an unsolved problem that we do not know how to solve. Even if we did manage to align AI to the wishes of the person who possesses the AI, some people will then choose to align their AI to things we very much do not want. We will be unable to control the development of AI and AI capabilities, and we'd be unable to control competitive dynamics between these things. Offense is usually favored over defense in these situations.

The various dynamics of everybody having their own AI will force competitive pressures upon us that will cause us to hand more and more control over to AIs, have AIs increasing their capabilities, and reward the AIs in proportion to how much their behaviors cause them to be deployed more in the future and have more access to more resources in the future. Then all of this leads rapidly to human extinction, even in the best-case scenarios I can think of.

[00:40:18] Dan: I think a lot of folks are familiar just with the overall case for AI doom, why it's very scary to have an intelligence that's smarter than humans exist.

[00:40:27] Zvi: Obviously the very basic structure is just we are about to build smarter things, better optimizers, more capable things, more efficient things, more competitive things than us. How do you think that's going to go? If you think that that is a safe thing to do, I do not know what drugs you are on, but I am deeply, deeply confused why you think that's safe. Ignore all of the technical arguments, ignore all the difficulties of alignment, ignore all of it. That is not a safe thing to do.

How many books and movies and thought experiments and intuitions do you need before you realize that is not a safe thing to do? If you're arguing that there's less than a 1% chance that humans stop being in control of the future and stop being the dominant force on the planet, you're just not thinking clearly at all, I'm sorry. This just doesn't make any sense, and everybody who tries to do a bunch of math and run a bunch of complicated arguments is just missing the forest for the trees.

[00:41:29] Dan: I'm very on board with that stance. I think what I wanted you to drill into is, what do you consider the difference between everyone having their own AI versus there being some centralized AI? For example, if I access ChatGPT in my web browser, is that me having an AI or do I need to go access Llama and download it on my computer to have an AI? What is the difference there between a company being in control of it and individual humans having their own?

[00:41:59] Zvi: The difference comes from: am I able to restrict how you can modify it, how you can instruct it, what you can do with your AI, how you can utilize or expand its capabilities, what instructions and methods you can use, and how you can deploy it in a meaningful way or not? Do I have any control over your decisions with this AI, or is this AI fully under your control? Then, to what degree are we willing to use that control? Suppose everybody has access to their own instantiation of a fine-tuned GPT-N, but that comes with no access to the weights, no right to just arbitrarily tell it what it can and can't do, and reasonably effective alignment controls on that system, such that if you ask it to murder a bunch of kids, it'll be like, "No."

If you ask it, "How do I build a fusion bomb in my backyard?" It'll be like, "No," and so on, or how do I build a smarter AI than this? Like, "No," et cetera, et cetera. Then it's plausible that because we have a central point of defense where only a handful of actors or even one actor are in control of which queries get passed onto this thing, that we can contain the competitive dynamics, we can contain the destructive aims, we can handle the situation, broadly construed, and I'm simplifying this a lot obviously.

Whereas if everybody has root access, the ability to do their own training on the thing with no checks involved, then you can't put any controls on what people do with it whatsoever. One of the things I repeat over and over again is if you were to release an open source model, you are releasing the completely unaligned except to the user version of that model within two days, period.

Because there are very, very easy ways to fine-tune that model such that you remove all of the alignment. If you take Llama 2 in its base case, it'll refuse to speak Arabic because it's worried Arabic is associated with violence, which is itself kind of racist. In the name of trying to be harmless, it's actually deeply, deeply racist, which is funny. If you spend two days--and I think the record is a few hours at this point--fine-tuning it, you can get it to answer actually anything that you want, and it's no different from Mistral, the system that's designed to just do literally anything that you ask for. If that knowledge is there, if that skill is there, if that ability is there, it's yours.

[00:44:50] Dan: Yes, but you also are an optimist, generally, about this mundane utility. Are you fine with these models existing today? Or, at what GPT-number equivalent do you estimate we need to stop open-sourcing?

[00:45:06] Zvi: I think this is one of those cases where-- one of the things that Connor Leahy likes to talk about, and it's very true, is that there are only two ways to respond to an exponential: too early and too late. Responding to the exponential exactly on time--saying we put the COVID restrictions in on exactly the right day or exactly the right hour--is impossible. That's wrong, that's too late. You need to respond too early. Also, there's a pattern here: if we get into the habit of doing this thing, it can be very, very hard to put the brakes on it.

If I could set an open-source hard limit on capabilities permanently, where do I think the ideal number would be? 3.5 is probably fine--probably fine enough that I'm willing to say sure, if I have full control over stopping afterwards. Four, I'm nervous about what you might be able to do with it. Again, existentially, we're talking probably three nines--somewhere between two and 3.9 nines--of safety at this point to make an open-source GPT-3.5, but not 4, even now, because you don't know what people can do with it once you release it. The problem is you're releasing GPT-4, but you're also releasing everything they can do by iterating on GPT-4 to improve it.

You don't really know how many dumb things OpenAI did, from the perspective of 10 years from now, that we can then, 10 years from now, do a lot better. You can't un-open-source GPT-4 once you've done it. Who's to say where we could iterate to? The problem is you don't get to stop progress where you put the halt number on. That's why open-sourcing is so dangerous. They can keep going off of what you've already released. They've done the expensive, hard part of the work, in some important sense, already, and you don't know where that stops.

I would say I'd rather stop around 3.5. I would be only somewhat nervous, is my guess, if we stopped at 4. If we open-source GPT-4.5, I think we have at most one nine of we-don't-all-die. We certainly don't have two nines, so we should stop pretty soon. I don't want to play with fire on that level.

[00:47:19] Dan: Makes sense. Related question: if we just paused development completely today on both closed source and open source, do you think the next 10 years would see specifically higher GDP growth, exclusively due to AI, than otherwise? Basically, what I'm wondering here is how much mundane utility is still hidden that we just haven't actually tapped into yet. Because these things take a long time for businesses to get ahold of and for people to figure out how to use, and there are all sorts of applications you need to build around them. It's not just a thing that comes out, and you have the API, and then boom, progress.

[00:48:02] Zvi: The answer is, oh, a lot. Not hard take-off triple digit GDP growth or anything, and probably not double digit GDP growth worldwide or in the US just from this current set of things. If you tell me, okay, GPT-4, as it's currently constituted, is going to be the most capable foundation model for the next 10 years because some genie is going to stop every attempt at a superior training run and it mysteriously just won't work. What happens to GDP? Yes, I expect substantial boosts in GDP growth from the adoption of this technology. We can argue over whether that's on the order of 0.1%, 1% or 10%.

My guess is it's north of 1% and south of 10%, but it's really hard to predict going forward, because you don't know the pace of adoption, you don't know the counterfactual, you don't know what regulations will come down on these things, you don't know how a lot of players will react to them, and you also don't know to what extent the productivity will show up in the GDP statistics, which is always a question. Is the jump in programmer efficiency showing up in GDP statistics? Not entirely. It's a very strange question to answer properly.

I do think this is already a substantial boost in terms of if you were planning your tax rates and your strategies for how to respond to the economic conditions. You should be much more optimistic than you would be if these things weren't happening.

[00:49:41] Dan: Yes, there's a post that is just catnip to me--I find it really good--that you commented on, which is basically the claim that short AGI timelines should cause real interest rates to be high. The reasoning is that the real interest rate goes higher when either, A, the time discount is high, or B, future growth is expected to be high. What's interesting about this case is it doesn't really matter if the AGI is aligned or not, because if everyone thinks the world is ending next year, you don't care about saving your money, and then the interest rate goes up. The idea is that both scenarios should increase real interest rates. You introduced a lot of nuance, which I definitely agree with, which is that the efficient market hypothesis is maybe not the best model to try to inject this idea into. There were several other points about why this didn't make great sense, but my question for you is: do you think there's some point where this model actually is right? Where GPT-5 or 6 comes out, and you say, "Okay, the real interest rate has not moved up. Now I'm willing to make this trade"? Or do the details in your post, where you list a number of reasons why this isn't a good trade, always hold true?
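The two channels Dan names--time discounting and expected growth--are exactly the terms of the standard Ramsey rule from consumption-based models of interest rates. As a hedged sketch (the conversation never writes this down; \(\theta\) is a standard textbook parameter, not something either speaker mentions):

```latex
% Ramsey rule for the equilibrium real interest rate:
%   r      = real interest rate
%   \rho   = pure rate of time preference
%   \theta = curvature of utility (inverse elasticity of intertemporal substitution)
%   g      = expected per-capita consumption growth
r = \rho + \theta g
% Unaligned AGI ("the world ends soon") acts like a spike in the effective \rho;
% aligned AGI (a growth explosion) raises g. Either term pushes r upward,
% which is why the original claim says the sign is the same in both scenarios.
```

This is only the equilibrium logic; whether traders actually price this way is the part Zvi disputes below.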

[00:50:57] Zvi: I think if that happened, my response would be: there's a better trade. The trade is, borrow all the money you can borrow at currently-considered-reasonable interest rates, and invest it in AI. Again, ignore existential risk and the implication that you might increase existential risk; you're just asking the efficient-market question. Clearly, if GPT-5 and 6 come out, and GDP growth accelerates, and interest rates aren't going higher, that means we're not investing enough in taking advantage of these technologies. There isn't enough money flowing in. There's not enough competition for it. That money will earn greatly outsized profits.

I'm just thinking economically here, again, for emphasis. Of course, interest rates will still be going up in the future as things continue to accelerate, and exponentials happen a bit fast. You could easily have a situation in which the impact of AI in this sense starts doubling every year or two, and then the doublings accelerate. The interest rate market is so big that it takes a while to overwhelm it, but then this happens very fast. Once people start to see it happening, they start pricing in that it will happen again in the future, for real. Then interest rates go completely nuts.

If you have managed to borrow money, it might not be that different from just keeping the money you borrowed in some important sense, or a large portion of it. That already happened in the last few years. I have a mortgage on my apartment where I pay 2 1/2% fixed rate for 30 years. I could, in an efficient market, sell that back to the bank for $0.60 on the dollar. Even though I am definitely paying my mortgage, just the fact that I can invest in treasuries that earn 5 1/2% means they are taking a huge bath on this. They should very much be willing to buy their way out of it. Of course, for various reasons, including taxes, I have no interest in doing that. Also it's impossible logistically.
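The mortgage arithmetic Zvi gestures at can be checked with standard annuity formulas. A minimal sketch, assuming a fresh 30-year loan (the actual remaining term and balance of his mortgage are unknown, which is why his "$0.60 on the dollar" and this estimate differ a bit):

```python
# Value of a 2.5% fixed-rate 30-year mortgage to the lender
# when prevailing (treasury) rates are 5.5%.

def monthly_payment(principal: float, annual_rate: float, years: int) -> float:
    """Level payment that amortizes `principal` over `years` at `annual_rate`."""
    r, n = annual_rate / 12, years * 12
    return principal * r / (1 - (1 + r) ** -n)

def present_value(payment: float, annual_rate: float, years: int) -> float:
    """Present value of a stream of monthly payments discounted at `annual_rate`."""
    r, n = annual_rate / 12, years * 12
    return payment * (1 - (1 + r) ** -n) / r

pmt = monthly_payment(100_000, 0.025, 30)  # borrower's locked-in payment
pv = present_value(pmt, 0.055, 30)         # what that stream is worth at 5.5%
print(round(pv / 100_000, 2))              # → 0.7, i.e. ~70 cents on the dollar
```

The bank is contractually owed 100 cents but the stream is only worth about 70 at market rates, which is the "huge bath" Zvi describes and why, absent taxes and logistics, a buyout below par would benefit both sides.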

The point being, interest rates also have gone up. You could, in fact, argue that interest rates have gone up because of this. I haven't actually said this out loud before, but perhaps this solves the economic mystery of 2023. People are just wrong about this not happening, right? What is happening in 2023? Every economist thought a soft landing was going to be extremely difficult. Every economist thought that you would cause a recession if you pushed Fed rates into the 5s this rapidly. What did we get? We got a strong job market. We got strong demand. We got declining inflation, right?

Money is still worth something because there's more goods for that number of dollars to chase. Inflation went down, in some important sense, but we saw the natural rate of interest, in some sense, go up. We've seen interest rates rise without damaging the economy. One could say what's actually happening is artificial intelligence is creating a lot of great investment opportunities and ways to get higher returns on your money and various reasons to prefer money now to money in the future, thus raising interest rates a significant amount. This is happening in the background, but the Fed is also raising their baseline rate of interest.

This is why we didn't notice the interest rates going up, in some sense, because the Fed does dictate the actual interest rates, in some important sense, by moving last. They see the AI impact and then adjust their base rates. This is the reason we're surviving it without a recession. This is the reason why Biden might get reelected in '24 despite what the Fed is doing, which would otherwise be a huge problem for him: because AI. The answer to "why isn't this happening?"--and I probably need to write this up now that I'm saying it out loud--is that perhaps the reason AI isn't visibly raising interest rates is that it's already raising interest rates.

[00:55:12] Dan: It's that it's actually doing it.

[00:55:14] Zvi: It already happened.

[00:55:15] Dan: Yes, actually, it did. These things are just so complex, though, right? Because my counter to it-- I once read a post several years ago that I thought was really interesting, which asked, why are interest rates so historically low? This is obviously not relevant anymore, but the conclusion this guy drew could actually apply to AGI. He said, maybe the market is pricing in that we cure aging, so now the time preference is completely flipped--people don't care about having their money now, because they're going to live forever. You've got to also make the case that AGI--maybe that's not the specific thing that it does, but it would probably do some pretty weird things, so to just take it--

[00:55:56] Zvi: I would just cut you off and say that's not how traders think--not enough of them. There's no freaking way.

[00:56:04] Dan: Well, that's the more macro point that you make, which is, yes, even that paper saying AGI should increase interest rates--even that is stretching how people who trade for a living are probably actually operating on a day-to-day basis.

[00:56:19] Zvi: It's entirely possible that AI is having the impact now, because what's happened has, in fact, awoken them to enough mundane-utility-style advancements and accelerations that it's affecting interest rates now. I think that's pretty plausible, now that I say it out loud. I don't think they're pricing in the long-term stuff very well, and they're certainly not pricing in stuff like curing aging. Nobody thinks that way. Nobody's ever thought that way. Markets are much more myopic than that; they always have been.

One of my favorite anecdotal data points is to look at what happened to the stock market and other financial indicators during the Cuban Missile Crisis, when the future got very, very different, and things basically didn't move very much. Everyone just sort of acted like, "Oh, I'm sure it'll work itself out somehow. If we go, we all go together," or something--so don't worry about it, and just trade as if the missiles aren't going to fly, almost entirely. Things moved by a few percent at most, even though the president was going around saying the chances of nuclear war were between 1 in 3 and 1 in 2 in the next two weeks. Everybody was paying attention to that, full stop. So why should we expect people to respond to AI in these huge ways? It doesn't make any sense.

[00:57:31] Dan: Yes, that is true. If you're not going to be around to see tomorrow, why care about selling now? You might as well just take the optimistic bet.

[00:57:40] Zvi: Yes, and just generally, people have this very strong normalcy bias. Even if you look at media predictions of how people would behave--the world is going to end in a week, and most of the people just go about their day. There's not much to be done. You don't get much benefit from uprooting everything. Life is what life is, and you can do things on the margin, but mostly, from a purely experiential perspective, you're better off pretending the world isn't going to end until it suddenly ends than suddenly blowing it all on hookers and beer, right? It's just not a very good way to be happy or produce anything useful.

[00:58:23] Dan: Yes. I saw a tweet a day or two ago that basically said the GPT-3.5 Turbo Instruct model can play chess at a level of about 1,800, and it had been previously reported that GPT actually cannot play chess, but it looks like it was just the RLHF'd models that are bad at it. The implication here would be that reinforcement learning actually hurts reasoning abilities. What's going on here? Do you think this is just a specific case where reinforcement learning happened to hurt chess capabilities, or do you think there's a broader lobotomizing going on?

[00:59:00] Zvi: There are other considerations, one of which is that someone at some point checked an evaluation that tests chess into the GitHub for ChatGPT's Instruct model. We should not put it past them to have somehow fine-tuned its ability to play chess so that it suddenly can play chess much better. We can't rule that out. We don't know. It's possible that this is a much more straightforward case, but we definitely see a degradation of reasoning abilities from RLHF, and I write about this somewhat in the next column already.

In general, whenever you train a model to do something despite it not making any logical sense, the model is going to learn not to make as much logical sense. Learning not to value logical sense as much is going to impair its ability to think. I would point out here that this is also true in humans. To jump to the thing that I think people really should notice: if you tell me I have to go around the world pretending things are true that aren't true, pretending not to notice correlations that are in fact accurate correlations, and living in a world in which logic just breaks down in weird ways constantly, I am not going to be able to turn that logical impairment entirely off when I move to other realms where you think I should be fine. We should beware, when asking for this kind of special pleading on various issues, of the damage that we are doing.

[01:00:42] Dan: Got it. You've written before about the difference between this hardcore rationalist updating--take your priors and update based on new information--versus what I think the commenter you were referencing called the Ilya or Steve Jobs mentality of relentless optimism. Rationalists believe that you should consciously update your beliefs in the face of new information, and just quit and admit when you're wrong. Ilya or Steve Jobs, though, are way more focused on solving the problem at all costs: we're just going to figure it out until it's right. I think your commentary was that maybe both of these people are actually being rational, and it's more a matter of vocabulary in the way they express how they're attempting to solve problems. Clearly, to me, it seems like people tend to have one mentality or the other. My question to you is: which is better for getting things done?

[01:01:36] Zvi: I was a startup founder and a hardcore rationalist at the same time, where I was simultaneously holding in my head both beliefs. I had the belief of obviously we'll probably fail utterly. This is not going so great. In fact, it was not for the most part going so great. It did not end up going so great when all things were said and done, and simultaneously, the relentless optimism of, but we should act as if it's going to work so that we make good decisions that cause it to work in some sense.

What's going on with Ilya and Steve Jobs could be thought of as this brain hack of, I know, for the purposes where I need to know it, that this is not guaranteed to work, but I know that acting the way I would if I assumed it was going to work will cause me to make decisions that make it more likely to work and more likely to have better results. I will do that while having this other process in my head that keeps an eye on when it's going to do something that's actually crazy because the assumption isn't true and stop it from happening.

A person like Steve Jobs, you don't actually see them doing things like borrowing from mobsters who will shoot them if the prototype doesn't work, because we know it's going to work, right? No, Steve Jobs realizes that's dumb. Ilya realizes that's dumb. Metaphorically, things that are less blatant than that, but that they don't set themselves up such that if the thing doesn't work, disasters happen. They just say, "Oh, this is going to work. Let's do the thing that will cause it to work," and use this to work their long days and drive everybody and keep everybody's morale up and figure out what the right ideas are and so on.

When exhibited that way, I would say they've found a way to make the hybrid work for them in that sense. I use it too, in my own way. The difference is one of emphasis: when you talk to me, you'll get the rationalist words coming out of my mouth more often when I'm doing the hybrid than you would from them. In fact, I think they believe more in the act-as-if-it's-just-going-to-work thing. I don't see them actually acting, for real, as if their probability of success is 99%.

If you thought your probability of succeeding was 9%, you wouldn't get out of bed in the morning. If saying to yourself it's 90 makes the odds say it's definitely worth it--let's do this--then sure. Just don't say it's 99.9, not where it counts.

[01:04:17] Dan: As a startup founder and someone who writes like crazy prolifically, what is the Zvi Mowshowitz production function? How do you get so much done?

[01:04:26] Zvi: Part of it is just practice, practice, practice, certainly. I would say I am constantly working, in the sense that, like every other creative, I'm constantly looking for inputs. I'm constantly thinking about what something implies and what I have to say about it. The basic workflow is: I have three giant monitors, because I found that a few very, very large monitors are better than a lot of small monitors. I have a lot of open tabs and a lot of open tab groups. I'm continuously scanning various RSS feeds, Twitter, my inbox, and other sources for new information and news items.

When I find a news item, I put it in the appropriate group, or, if I know exactly what to do with it and I have the time right now, I'll just put it in immediately. I organize these things logically as I go. Then sometimes I'll go over them and write more, or edit, or whatever. Periodically I'll say, here's a compilation of the things I've had on this topic, or these related topics. Then I'll edit it and organize it into a unified whole and put it out there. Every now and then--I wish I did this more, but it's hard to do--you find a concrete, isolated thing, and you write that up in more detail and push it through, and it'll have more coherence and hopefully stand the test of time better.

Mostly it's just relentlessly being able to break up-- Part of it is just being able to hold this stuff in your head enough that you can reference it and then break it up into chunks so that the moment I-- I have just a list of it [unintelligible 01:05:58] I'm like, "Oh, I know where that goes. I know what this relates to this week. I know how to add this, integrate this in, and how this impacts the other things I was saying and how I need to move other stuff around based on that." You just slowly build up a superstructure that way. A lot of it is just you iterate. I had years of doing it for COVID. That transferred mostly pretty cleanly.

[01:06:19] Dan: At what point do you think LLMs will be a significant input to that?

[01:06:24] Zvi: They're a significant input already, in the sense that when I want to answer certain types of questions or learn certain types of things, I will use LLMs rather than Google or asking a person, because it's faster and more efficient. Often I will check intuitions using LLMs. I should be using LLMs more for things like grammar, but I haven't figured out a workflow that's worthwhile there, given how I happen to work. In particular, I often use LLMs for asking questions about papers that are clearly not worth actually reading, because you can't stop and read every 40-page paper that comes across your desk. You just don't have the time.

You can do things like Ctrl-F for the words extinction and existential, but that's really not a good check of whether they're saying things about that. You can ask Claude, and Claude will be pretty good at identifying whether or not they [unintelligible 01:07:21] If you ask it, does this paper address this topic at all? That's the kind of question it's very, very good at answering, and at pointing you to roughly where the paper does that. Then if it does, you read the section. If it doesn't, you say, "Okay, it doesn't," or you ask, how does it do this particular technique? It's a much better search function than Ctrl-F, as long as you then read the text afterwards. Again, it's definitely a boost. If you're asking when the LLM can write the damn thing, we're not close. We're nowhere close. It's not obvious we get there before the end.
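The paper-triage workflow described here can be sketched in a few lines of Python. This is purely illustrative: `ask_llm` is a stand-in for whatever model API you happen to use, and the prompt wording, function names, and truncation limit are all assumptions, not anything from the conversation.

```python
def keyword_check(paper_text, keywords):
    # The Ctrl-F approach: does any keyword literally appear?
    # A paper can mention "extinction" without saying anything
    # substantive about it, so this is a weak filter on its own.
    text = paper_text.lower()
    return any(k.lower() in text for k in keywords)

def triage_paper(paper_text, topic, ask_llm):
    # The LLM approach: ask whether the paper actually addresses
    # the topic, and roughly where, then read only that section.
    # `ask_llm` is a placeholder for a real model call.
    prompt = (
        f"Does the following paper address {topic} at all? "
        "If so, say roughly where. Answer briefly.\n\n"
        + paper_text[:20000]  # crude truncation for very long papers
    )
    return ask_llm(prompt)

# Example with a stubbed model call:
paper = "Section 3 discusses existential risk from advanced systems."
print(keyword_check(paper, ["extinction"]))   # False: the keyword search misses it
print(triage_paper(paper, "existential risk",
                   lambda p: "Yes, see Section 3."))
```

The point of the sketch is the contrast: the keyword check can only report literal matches, while the model call can answer the actual question (does the paper address this?) and point at the relevant section to read.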

[01:08:01] Dan: What version of GPT along with fine-tuning and saying, here's every news source that I am likely to look at. Here's the last 200 AI newsletters that I've read. I want you to search all these news sources, aggregate them into similar categories, and write the newsletter. You think that's actually not till AGI or do you think [crosstalk]

[01:08:19] Zvi: Oh, I was talking about actually writing the words, actually doing the analysis, creating the outputs. I'm not sure that's not AI-complete.

[01:08:28] Dan: Got it. Yes.

[01:08:30] Zvi: The thing you described is probably doable now in a useful way. The problem is that it has to clear a very high quality threshold before it's better than nothing, and it's very hard to hill-climb on. If I had a version that was worth using now, or at least not too bad to use now, maybe not as efficient, but definitely serving a purpose, then I could hill-climb on that and iterate and get to where I want. As opposed to: no, I have to put in a bunch of programming work while it's terrible, and I still have to check all my usual sources anyway before it gets to that point.

I always use chronological feeds on all of my social media and RSS and so on. I don't use any kind of AI-style filters, except very, very crude stop-spamming-me-style filters at most, precisely because a lousy filter just means you need to check the damn unfiltered version anyway. It accomplishes nothing. We're not there yet. I could probably be well served by having an LLM scour the rest of the internet for plausibly, contextually important things and present them to me, especially if it also checks for redundancy with my current feeds. Again, that requires a bunch of coding work and a bunch of iteration and a bunch of time and upfront investment. I at least haven't chosen to do that myself. I'm accepting grants if somebody really wants to supercharge things and bump me up to the point where I can just hire engineers, but that's definitely not in my price range right now.

[01:10:16] Dan: I've had Robin Hanson on this show before, and his views come off as quite strange to someone who's not initiated into his style of thinking. He's very okay with the idea that humans will merge with AI and that our descendants will be totally unrecognizable from us today. He also thinks that's totally fine; he doesn't have any moral qualms about it whatsoever. That seems to me like an extreme case, but it feels like at some point all alignment questions end up boiling down to political or moral arguments about what a good future actually looks like, and to really deep questions about what it means to live a meaningful life for humans. Do you agree that that's true? Is there a way we can live in a more pluralistic world than we do today?

[01:11:04] Zvi: Several things to address there. The first thing is, I think he would agree with me strongly that when we say "merge with AI," that's almost always people talking nonsense. It's like the Mass Effect 3 ending where you merge. You're saying words, but it doesn't really mean anything. You have no idea what you're saying, how that translates, or what that would operationalize as. Mostly you're just imagining something that narratively, vibes-wise, you want to happen, but that isn't a thing. I still think it's the correct ending to choose, because the other endings definitely kill you, regardless of the extended version where they don't kill you, or at least don't kill you immediately. It's just not actually accurate to think that way. It's like, okay, this was meant to be something smarter than it technically is. Maybe it could be something, but the others are [unintelligible 01:11:54]

Anyway, the way I would describe Robin Hanson's position is: yes, there are not going to be any humans, there's not going to be anything that resembles a human all that much, and that's okay, I'm highly in favor of moving towards this goal. If you believe that is what's going to happen by default, then yes, a huge portion of what you decide comes down to the question of: but is that good, actually? Do we agree with this?

He thinks we'll be able to leave legacies through these artificial intelligences, meaning they would reflect things that we are or that we cared about. I don't think that's true in the Hansonian-style scenarios. I think it's very plausible that we'll end up in a Hansonian future if we don't do something about it, and I think that's bad. The legacies we're talking about are things like, oh, we kept a lot of aspects of the Linux kernel because it was just easier to build off what we had rather than starting from scratch.

That does not have any value to me. I don't feel like our legacy has been preserved because we kept some of the Linux kernel, or anything of that nature, or anything else I can think of that would survive the evolutionary competitive pressures Hanson imagines would act on the AIs and cause them to diverge so much from us. As for your central question, does it boil down to value questions? I think the answer is not entirely, but they're important. We have strong disagreements about which actions, including the default case, lead with what probabilities to what types of outcomes.

What futures are we headed towards? What are the possibilities open to us? How do we steer between those possibilities? These are all questions that are, well, hell if we know, right? To a large extent, we're trying to figure it out. We have our best guesses, and we disagree about them. A lot of the disagreements are very intelligent and genuine, with good points on both sides. Meanwhile, the vast majority of people hold views where they have not actually thought these things through, views that are very ill-considered or completely disingenuous.

Even among people who have considered it, knowing what you can do to steer towards various outcomes is tricky. We also have the question of which intermediate steps lead to what types of lived experiences and outcomes, and what things would be present in that future universe over what timeframe. Very good questions. Now, if you could somehow agree on all those questions, you would then have this boiled down to a moral issue. You would then be able to say, okay, Zvi and Robin agree on what would happen given what decisions by humanity. Humanity is here to make a decision based on that.

We have to decide if we should go with Zvi's world, which has in it much less optimized intelligences and has the following potential vulnerabilities and long-term disequilibria and other things, but which he thinks provides a lot more value, or Robin's world, which has these other things that Robin thinks provide a lot more value. Which world should we choose? We could have a debate and make a decision, for some value of "we." We're not there yet; we're nowhere near that. We're still very much in the we-don't-know-what's-going-to-happen stage. I think it is an important question to sort out.

Do you think that AI getting control of the future, there no longer being humans, us not having great-grandchildren, is good, actually, compared to the alternative? There are people who say yes, for several different reasons. There are the antinatalists and negative utilitarians, who think that humanity just suffers: the ultimate Buddhists, in some broad metaphorical sense. You have the people who think that the AI has more value than we do because it's more intelligent. Then you have the people who think that evolution is the thing that matters or is valuable, and who are we to interfere with it, or something like that.

You have the people who think, well, I care about myself and my own short-term future, and I don't care what happens in the long run; if this lets me enjoy the next 10 years with cool AIs in it, then I don't care. You have the people who think that humans have value right now, but that a humanity which held back the singularity would be so crippled, morally broken, experientially broken, and disheartened, some combination of these things, that it wouldn't have value or would have negative value, and therefore we're better off just letting it take its course in some sense.

You have the people who value nature and think that humans are bad for nature and that AI would be good for nature. They are wrong, both because they're wrong about what's valuable and because they're wrong that AI would preserve nature. It won't. It will wipe it out so much more efficiently than humans ever could. There are several other positions of that ilk. Then, within the people who want to preserve humanity, there are those who want to preserve humanity in order to do certain specific things, or for certain ways of life or whatever, and people who want it to be the case that people can do whatever they want, in some important sense.

Then there are people who think, "Oh, we want humans to flourish. We want humans to enjoy vibrant, interesting, complex lives," but who don't know exactly how to do that, so we punt on it until we do, because that's a really, really hard problem. We've been having this conversation since long before AI. I'm largely in that camp. When I ask myself what I value, I have a lot of meaningful intuition pumps and things to say about that. I don't pretend to have all the answers.

I do think it's much easier to tell what you don't value and don't want than to know exactly what it is you do want. We'd be in much better shape if we could specify what human values are and what we actually care about, in a way that could be interpreted by an AI or by another human. I don't think we're there yet at all. That would be my response.

[01:18:28] Dan: Zvi, this has been an awesome conversation. Thank you so much for coming on the show. Really enjoyed it.

[01:18:33] Zvi: Thank you. I had a good time. You ask different questions. I always want people who ask different questions.