Tyler Cowen - The #1 Bottleneck to AI progress Is Humans

Why he thinks AI won't drive explosive economic growth

I interviewed Tyler Cowen at the Progress Conference 2024. As always, I had a blast. This is my fourth interview with him – and yet I’m always hearing new stuff.

We talked about why he thinks AI won't drive explosive economic growth, the real bottlenecks on world progress, him now writing for AIs instead of humans, and the difficult relationship between being cultured and fostering growth – among many other things in the full episode.

Thanks to the Roots of Progress Institute (with special thanks to Jason Crawford and Heike Larson) for such a wonderful conference, and to FreeThink for the videography.

Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here.

Sponsors

I’m grateful to Tyler for volunteering to say a few words about Jane Street. It's the first time that a guest has participated in the sponsorship. I hope you can see why Tyler and I think so highly of Jane Street. To learn more about their open roles, go to janestreet.com/dwarkesh.

Timestamps

(00:00:00) Economic Growth and AI

(00:14:57) Founder Mode and increasing variance

(00:29:31) Effective Altruism and Progress Studies

(00:33:05) What AI changes for Tyler

(00:44:57) The slow diffusion of innovation

(00:49:53) Stalin's library

(00:52:19) DC vs SF vs EU

Transcript

Dwarkesh Patel 00:00:07

Tyler, welcome.

Tyler Cowen 00:00:08

Dwarkesh, great to be chatting with you.

Dwarkesh Patel 00:00:11

Why won't we have explosive economic growth, 20% plus, because of AI?

Tyler Cowen 00:00:17

It's very hard to get explosive economic growth for any reason, AI or not. One problem is that some parts of your economy grow very rapidly, and then you get a cost disease in the other parts of your economy that, for instance, can't use AI very well.

Look at the US economy. These numbers are guesses, but government consumption is what, 18%? Healthcare is almost 20%. I'm guessing education is 6 to 7%. The nonprofit sector, I'm not sure the number, but you add it all up, that's half of the economy right there.

How well are they going to use AI? Is failure to use AI going to cause them to just immediately disappear and be replaced? No, that will take, say, 30 years. So you'll have some sectors of the economy, less regulated, where it happens very quickly. But that only gets you a modest boost in growth rates, not anything like the whole economy grows 40% a year.

Dwarkesh Patel 00:01:04

The mechanism behind cost disease is that there's a limited number of laborers, and if there's one high-productivity sector, then wages everywhere have to go up. So your barber also has to earn twice the wages or something. With AI, you can just have every barbershop with 1,000 times the workers, every restaurant with 1,000 times the workers, not just Google. So why would the cost disease mechanism still work here?

Tyler Cowen 00:01:25

Cost disease is more general than that. Let's say you have a bunch of factors of production, say five of them. Now, all of a sudden, we get a lot more intelligence, which has already been happening, to be clear.

Well, that just means the other constraints in your system become a lot more binding, that the marginal importance of those goes up, and the marginal value of more and more IQ or intelligence goes down. So that also is self-limiting on growth, and the cost disease is just one particular instantiation of that more general problem that we illustrate with talk about barbers and string quartets.
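Tyler's point about other constraints becoming binding can be sketched as a toy perfect-complements production function. This is an illustration only, not a model he endorses; the factor names and numbers are assumptions chosen to make the mechanism visible:

```python
# Toy bottleneck sketch: with strongly complementary factors of production,
# output is capped by the scarcest input, so flooding one factor barely helps.
def output(intelligence, energy, institutions, labor, land):
    # Leontief (perfect complements): the binding constraint sets output.
    return min(intelligence, energy, institutions, labor, land)

baseline = output(1.0, 1.0, 1.0, 1.0, 1.0)
ai_boom = output(1000.0, 1.0, 1.0, 1.0, 1.0)  # intelligence up 1000x
print(baseline, ai_boom)  # both 1.0: the marginal value of extra intelligence collapses
```

Real production functions are not this extreme, but the closer inputs are to complements, the more the scarce ones set the pace.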

Dwarkesh Patel 00:01:57

If you were talking to a farmer in 2000 BC, and you told them that growth rates would 10x, 100x, you'd have 2% economic growth after the Industrial Revolution, and then he started talking about bottlenecks, what do you say to him in retrospect?

Tyler Cowen 00:02:11

He and I would agree, I hope. I think I would tell him, "Hey, it's going to take a long time." And he'd say, "Hmm, I don't see it happening yet. I think it's going to take a long time." And we'd shake hands and walk off into the sunset. And then I'd eat some of his rice or wheat or whatever, and that would be awesome.

Dwarkesh Patel 00:02:29

But the idea that you can have a rapid acceleration in growth rates and that bottlenecks don't just eat it away, you could agree with that, right?

Tyler Cowen 00:02:38

I don't know what the word "could" means. So I would say this: You look at market data, say real interest rates, stock prices, right now everything looks so normal, startlingly normal, even apart from AI. So what you'd call prediction markets are not forecasting super rapid growth anytime soon.

If you look at what the experts on economic growth say... We had Chad Jones here yesterday. He's not predicting super rapid growth, though he thinks AI might well accelerate rates of growth. So the experts and the markets agree. Who am I to say different from the experts?

Dwarkesh Patel 00:03:09

You're an expert.

Tyler Cowen 00:03:11

I'm with another expert.

Dwarkesh Patel 00:03:13

In his talk yesterday, Chad Jones said that the main variable, the main input into his model for growth, is just population. If you have a doubling, an order of magnitude increase in the population, you plug that number into his model, you get explosive economic growth.
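For readers unfamiliar with the model being referenced: in semi-endogenous growth models of the Jones variety, the growth of the idea stock depends on the research population. A toy simulation, with illustrative parameters rather than Jones's actual calibration, shows why plugging in an order-of-magnitude population jump produces a burst of growth:

```python
# Toy Jones-style ideas dynamics: Adot/A = delta * L**lam * A**(phi - 1).
# Parameters are illustrative only, not a calibration.
lam, phi, delta = 1.0, 0.5, 0.01
A, L = 1.0, 1.0  # idea stock, research population
growth = []
for t in range(30):
    g = delta * L**lam * A**(phi - 1)  # growth rate of the idea stock
    growth.append(g)
    A *= 1 + g
    if t == 9:
        L *= 10  # order-of-magnitude jump in the research population
# growth[10] is roughly 10x growth[9]: the population jump produces a burst
# of growth, which then decays as A rises (diminishing returns to ideas).
```

In this class of models the jump raises the level path dramatically, though with phi < 1 the growth rate eventually settles back down, which is part of what the two of them are disputing.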

Tyler Cowen 00:03:26

I don't agree.

Dwarkesh Patel 00:03:27

Why not buy the models?

Tyler Cowen 00:03:28

His model is far too much a one-factor model, right? Population. I don't think it's very predictive. We've had big increases in effective world population in terms of purchasing power. A lot of different areas have not become more innovative. Until the last, say, four years, most of them became less innovative.

So it's really about the quality of your best people or institutions, as you and Patrick were discussing last night. And there it's unclear what's happened, but it's also fragile. There's the perspective of the economist, but also that of the anthropologist, the sociologist.

They all matter. But I think the more you stack different pluralistic perspectives, the harder it is to see that there's any simple lever you can push on, intelligence or not, that's going to give you breakaway economic growth.

Dwarkesh Patel 00:04:11

What you just said, where you're bottlenecked by your best people, seems to contradict what you were saying in your initial answer, that even if you boost the best parts, you're going to be bottlenecked by the restaurants.

Tyler Cowen 00:04:20

You're one of our best people, right? You're frustrated by all kinds of things.

Dwarkesh Patel 00:04:25

I think I'm gonna be making a lot more podcasts after AGI.

Tyler Cowen 00:04:28

Okay, good. I'll listen. I'll be bottlenecked by time.

Here's a simple way to put it. Most of sub-Saharan Africa still does not have reliable clean water. The intelligence required for that is not scarce. We cannot so readily do it.

We are more in that position than we might like to think, but along other variables. And taking advantage of the intelligence from strong AI is one of those.

Dwarkesh Patel 00:04:53

So about a year ago, your co-writer on Marginal Revolution, Alex Tabarrok, had a post about the extreme scarcity of high-IQ workers. And so if the labor force in the United States is 164 million people, if one in a thousand of them are geniuses, you have 164,000 geniuses. That's why you have to do semiconductors in Taiwan, because that's where they're putting their nominal amount of geniuses. We're putting ours in finance and tech.

If you look at that framework, we have a thousand times more of those kinds of people. The bottlenecks are going to eat all that away? If you ask any one of these people, if you had a thousand times more of your best colleague, your best coworker, your best co-founder, the bottlenecks are going to eat all that away? Your organization isn't going to grow any faster?

Tyler Cowen 00:05:32

I didn't agree with that post. If you look at labor market data, the returns to IQ as it translates into wages, they're amazingly low. They're pretty insignificant.

People who are very successful, they're very smart, but they're people who have, say, eight or nine areas where, on a scale of 1 to 10, they're a nine. They have one area where they're an 11 and a half on a scale of 1 to 10. And then on everything else, they're an eight to a nine and have a lot of determination.

And that's what leads to incredible success. And IQ is one of those things, but it's not actually that important. It's the bundle, and the bundles are scarce. And then the bundles interacting with the rest of the world.

Like just try going to a mid-tier state university and sit down with the committee designed to develop a plan for using artificial intelligence in the curriculum. And then come back to me and tell me how that went, and then we'll talk about bottlenecks. They will write a report. The report will sound like GPT-4, and we'll have the report. The report will not be bottlenecked, I promise you.

Dwarkesh Patel 00:06:38

These other traits, look, whether it's conscientiousness, whether it's pliability, whatever, the AIs will be even more conscientious. They work 24/7. If you need to be deferential to the FDA, they'll write the best report the FDA has ever seen. They'll get things going along. They're not going to be bottlenecked by these other traits.

Tyler Cowen 00:06:54

They'll be smart and they'll be conscientious, that I strongly believe. I think they will boost the rate of economic growth by something like half a percentage point a year. Over 30-40 years, that's an enormous difference. It will transform the entire world.

But in any given year, we won't so much notice it. A lot of it is something like a drug that might have taken 20 years now will come in 10 years. But at the end of it all is still our system of clinical trials and regulation.

If everything that took 20 years takes 10 years, over time, that's an immense difference. But you don't quite feel it as so revolutionary for a long time.
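As a quick arithmetic check on the "half a percentage point" claim, the numbers below are chosen only to illustrate compounding, not taken from any forecast:

```python
# Compounding an extra half percentage point of annual growth over 40 years.
years = 40
extra = 1.005 ** years
print(round(extra, 3))  # ~1.221: the half point alone leaves the economy
                        # ~22% larger after 40 years

# Against an assumed 2% baseline vs a 2.5% boosted path:
base, boosted = 1.02 ** years, 1.025 ** years
print(round(base, 2), round(boosted, 2))  # ~2.21x vs ~2.69x
```

This is the sense in which the effect is "an enormous difference" cumulatively while barely noticeable in any single year.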

Dwarkesh Patel 00:07:27

The whole vibe of this progress studies thing is: look, we've got all these low-hanging fruits, or medium-hanging fruits, such that if we fixed our institutions, if we made these changes to regulations and institutions, we could rapidly boost the rate of economic growth. You're saying, okay, we can fix the NIH and get increases in economic growth. But if we have a billion extra people, 10 billion extra people, the smartest people, the most conscientious people, that makes barely an iota of difference to economic growth.

Isn't there a contradiction between how much that rate of economic growth can increase between these two perspectives?

Tyler Cowen 00:07:58

There's diminishing marginal returns to most of these factors. A simple one is how it interacts with regulation, law, and the government. Another huge one is energy usage.

How good is our country in particular at expanding energy supply? I've seen a few encouraging signs lately with nuclear power. That's great. Most places won't do it.

And even those reports, exactly how many years it will take, I know what the press releases say. We'll see, you know, it could be 10 years or more. That will just be a smidgen of what we'll need to implement the kind of vision you're describing.

So yeah, there are going to be bottlenecks all along the way, the whole way. It's going to be a tough slog, like the printing press, like electricity. The people who study diffusion of new technologies never think there will be rapid takeoff.

My view is I'm always siding with the experts. Economists, social scientists, most of them are blind and asleep to the promise of strong AI. They're just out to lunch. I think they're wrong. I trust the AI experts.

But when you talk about, say, diffusion of new technologies, the people who do AI are basically totally wrong. The people who study that issue, I trust the experts. If you put together the two views where in each area you trust the experts, then you get my view, which is amazing in the long run, will take a long time, tough slog, all these bottlenecks in the short run.

The fact that there's like a billion of your GPT whatevers, which I'm all in love with, I promise you it's going to take a while.

Dwarkesh Patel 00:09:26

What would the experts say if you said, look, we're going to have, forget about AI, because I feel like when people hear AI, they think of GPT-4, not the humans, not the things that are going to be as smart as humans. What would the experts say if you said tomorrow the world population, the labor force, is going to double? What impact would that have?

Tyler Cowen 00:09:41

Well, what's the variable I'm trying to predict? If you mean energy usage, that's going to go up, right? Over time, it's probably going to double.

Dwarkesh Patel 00:09:51

Growth rate, I'm not sure it'd be a noticeable difference. Doubling the world population?

Tyler Cowen 00:09:52

Yeah, I'm not sure. I don't think the Romer model has been validated by the data. And I don't agree with the Chad Jones model, much as I love him as an economist. I don't think it's that predictive.

Look at artistic production in Renaissance Florence. There are what, 60,000 people in the city, the surrounding countryside. But it's that so many things went right at the top level that it was so amazing in terms of still value added today.

The numbers model doesn't predict very well.

Dwarkesh Patel 00:10:19

The world economy today is some hundred trillion something. If the world population was one-tenth of what it is now, if we only had 1 billion people, 100 million people, you think we could have the world economy at this level with our level of technology?

Tyler Cowen 00:10:31

No. The delta is a killer, right? This is one thing we learned from macro. The delta and the levels really interact.

So shrinking can kill you. Just like companies, nonprofits, if they shrink too much, often they just blow up and disappear. They implode. But that doesn't mean that growing them gets you 3x, 4x, whatever, proportional to how they grow.

It's oddly asymmetric. It's very hard to internalize emotionally that intuition in your understanding of the real world. But I think we need to.

Dwarkesh Patel 00:11:00

What are the specific bottlenecks? Like, why?

Tyler Cowen 00:11:04

Humans. Here they are. Bottleneck, bottleneck. Hi, good to see you. And some of you are terrified. You're going to be even bigger bottlenecks.

That's fine. It's part of free speech. But my goodness, once it starts changing what the world looks like, there will be much more opposition. Not necessarily from what I call doomsday grounds, but just people like, hey, I see this has benefits, but I grew up, trained my kids to live in some other kind of world. I don't want this.

And that's going to be a massive fight. I really have no prediction as to how it's going to go. But I promise you, that will be a bottleneck.

Dwarkesh Patel 00:11:40

But you can see even historically, you don't have to go from the farmers to the Industrial Revolution's 10x. You can just look at actual cases in history where we have had 10x rates, sorry, 10% rates of economic growth. You go to China after Deng Xiaoping: they have decades of 10% economic growth, and that's just because you can do some sort of catch-up.

So the idea that you could replicate that with AI doesn't seem infeasible. Where were the bottlenecks when Deng Xiaoping took over?

Tyler Cowen 00:12:09

They're in a mess now. I'm not sure how it's going to go for them. They're just a middle-income country. They struggled to tie per capita income with Mexico.

I think they're a little ahead of Mexico now. They're the least successful Chinese society in part because of their scale. Their scale is one of their big problems.

There's this fear that if they democratize and try to become a normal country, that the median voter won't protect the interests of the elites. So I think they're a great example of how hard it is for them to scale because they're the poorest group of Chinese people on the planet.

Dwarkesh Patel 00:12:40

I mean not the challenges now, but the fact that for decades they did have 10% economic growth, in some years 15%.

Tyler Cowen 00:12:45

Well, starting from a per capita income of like $200 per head.

Dwarkesh Patel 00:12:49

And now their ancestors were going to be like as poor as the Chinese, you know, like 30 years ago.

Tyler Cowen 00:12:55

I'm very impressed by the Industrial Revolution. You could argue, for all the progress studies people here, that it's the most important event in human history, maybe. The typical rate of economic growth during that period was about 1.5 percent.

And the future is about compounding and sticking with it and, you know, seeing things pay off in the long run. Human beings are just not going to change that much. And I don't think that property of our world will change that much, even with a lot more IQ and conscientiousness.

Dwarkesh Patel 00:13:21

I interviewed you nine months ago, and I was asking you about AI then. I think your attitude was like, "Eh." How has your attitude changed since we talked nine months ago?

Tyler Cowen 00:13:34

I don't remember what I thought in what month, but I would say on the whole, I see more potential in AI than I did a year ago. I think it has made progress more quickly than I had been expecting, and I was pretty bullish on it back then.

The o1 model, to me, is very impressive. I think further extensions in that direction will make a big, big difference. The rate at which they come is hard to say, but it's something we have, and we just have to make it better.

Dwarkesh Patel 00:14:01

You showed me your document of different questions that you came up with for o1, for economic reasoning.

Tyler Cowen 00:14:06

I don't think I used GPT-4 for those.

Dwarkesh Patel 00:14:07

Okay, but what percentage of them did o1 get right? Because I don't think I got a single one of those right.

Tyler Cowen 00:14:12

Those questions were too easy. They were for GPT-4, so I abandoned those questions.

You know, a hundred questions of economics. How well does a human do on them? They're hard, but it's pointless. I would not be shocked if somebody's AI model, in less than three years, beat human experts on a regular basis. Let's put it that way.

Dwarkesh Patel 00:14:38

Did that update you in any way, that now you've retired these questions because they were too easy for these models? Initially, like, they are hard questions objectively, right? They're just easy for o1.

Tyler Cowen 00:14:49

I feel like Kasparov the first time he met Deep Blue. There were two matches, and in the first one, Kasparov won. I lived through that first match.

I feel like I'm sort of in the midst of the first match right now, but I also remember the second match. In the final game, Kasparov made that bonehead error in the Caro-Kann defense. That, too, was a human bottleneck, and he lost the whole match. So we'll see what the rate of change is.

Dwarkesh Patel 00:15:15

Yesterday, Patrick was talking about how important it is for the founders of different institutions to hang around and be the ones in charge. I've heard you talk about, you know, the Beatles were great because the Beatles were running the Beatles. Why do you think it's so important for that to be the case?

Tyler Cowen 00:15:32

I think courage is a very scarce input in a lot of decisions. Founders have courage to begin with, but they also need less courage to see through a big change in what the company will do.

Facebook, now Meta, has made quite a few big changes in its history. Mark had a lot of courage to begin with. But if Mark Zuckerberg says, "We're going to do this, we're going to do that," it's pretty hard for everyone else to say no in a good way. I really like that. It economizes on courage, having a founder, and you're selecting for courage. Those would be two reasons.

Dwarkesh Patel 00:16:08

How does that explain the Beatles' success?

Tyler Cowen 00:16:10

Well, the Beatles are an interesting example. I mean, they broke up in 1970, right? The Rolling Stones are still going. That tells you something, but the Beatles created much greater value.

The Beatles are the group we still all talk about much more, even though the Rolling Stones are still with us. They were always unstable. There are two periods of the Beatles: early Beatles, John is the leader. But then Paul works at it, and John becomes a heroin addict. Paul gets better and better and better, and ultimately there's no core. There's not a stable equilibrium.

The Beatles split up, but that creative tension for those core seven to eight years was just unbelievable. And it's four founders -- Ringo, not quite a founder, but basically a founder because Pete Best was crummy, and they got rid of him right away.

It's one of the most amazing stories in the world. I like studying these amazing productivity stories like Johann Sebastian Bach, Magnus Carlsen, Steph Curry, the Beatles -- I think they're worth a lot of study. They're atypical. You can't just say, "Oh, I'm going to be like the Beatles." Like, you're going to fail. The Beatles did that. But nonetheless, I think it's a good place to look for getting ideas and seeing risks.

Dwarkesh Patel 00:17:15

What did you think of Patrick's observation of the competency crisis?

Tyler Cowen 00:17:19

I see it differently from Patrick, and he and I have discussed this. I think there's basically increasing variance in the distribution.

Young people at the top are doing much better, and they're far more impressive than they were in earlier times. If you look at areas where we measure performance of the young -- chess is a simple example; we perfectly measure performance -- very young people are just better and better at chess. That's proven. Even in NBA basketball, you have very young people doing things that they would not have been doing, say, 30 years ago. A lot of that is mental and training and application and not being a knucklehead. So the top of the distribution is getting much better.

You see this also in science, Internet writing. The very bottom of the distribution -- well, youth crime has been falling since the 90s, so the very bottom of the distribution also is getting better. I think there's some thick middle above the very bottom and extending a bit above the median. That's clearly getting worse.

Because they're getting worse, there are a lot of anecdotal examples of them getting worse, like students wanting more time to take the test or having flimsy excuses or mental health problems with the young or whatever. It's a lot more of that because of that thick band of people getting worse, and that's a great concern. But I see the very bottom and a big chunk of the top of the distribution as just much better. I think it's pretty proven by numbers that that's the case. I would say this increasing variance is a weird mix of where the gains and declines are showing up.

I've said this to Patrick, and I'm going to say it to him again, and I hope I can convince him.

Dwarkesh Patel 00:18:53

It seems concerning then, the composition, that the average goes down. If you look at PISA scores or something.

Tyler Cowen 00:18:58

The median goes down. A lot of tests, they've pushed more people into taking the test -- PISA scores in particular.

I suspect those scores, adjusted for that, are roughly constant, which is still not great. I agree. I think there's some decline.

Some of it is pandemic, and we're recovering a bit slowly, getting back to human bottlenecks. But I think a lot of the talk of declining test scores is somewhat overblown. At most, there's a very modest decline, I would say.

Dwarkesh Patel 00:19:25

If the top is getting better, what do you make of the anecdotal data he was talking about yesterday where the Stanford kids come up to him and say, "All my friends, they're stupid. You can't hire anybody from Stanford anymore." That should be the cream of the crop, right?

Tyler Cowen 00:19:37

There's plenty of data on the earnings of Stanford kids. If there were a betting market in, you know, what's the future trend? I'm long. How long I should be, I really don't know. But I visit Stanford not every year, but regularly, and the selection in who it is I meet...

We're talking about selection, and they're very impressive. Emergent Ventures has funded a lot of people from Stanford. As far as I can tell, as a group, they're doing very well.

So that is of no concern to me. If you're worried about the Stanford kids, something seems off in the level of salience and focus in the argument because they're overall doing great. They have high standards, and that's good too.

Paul McCartney thought John Lennon was a crummy guitar player, and John thought a lot of Paul's songs were crap. In a way they're right, in a way they're wrong. But it's a sign what high standards the Beatles had.

How old are you, by the way?

Dwarkesh Patel 00:20:30

24.

Tyler Cowen 00:20:30

Okay, now go back however many years. Was there a 24-year-old like you doing the equivalent of podcasting? It's just clearly better now than it was back then.

And you were doing this a few years ago, so it's just obvious to me that the peaks among the young are doing better. You're proof.

Dwarkesh Patel 00:20:49

Wasn't Churchill, by the time he was 24, an international correspondent in Cuba and India? I think he was the highest-paid journalist in the world by the time he was 24.

Tyler Cowen 00:21:00

I don't know. I mean, what was he paid, and how good was his journalism? I just don't know. I don't think it's that impressive a job to be an international journalist.

What does it pay people now? He did some good things later on, but most of his early life he's a failure.

And then ask the Irish. Getting back to Patrick, ask the Irish and people from India what they think of younger Churchill, and you'll get an earful. His real great achievement, I don't know how old he was exactly, but it's quite late in his life. Until then, he's a destructive failure.

There was no one on Twitter to tell him, "Hey, Winston, you need to rethink this whole Irish thing." Today there would be. Sam will do it, right? Sam will tweet at Winston Churchill, "Got to rethink the Irish thing." And Sam is persuasive.

Dwarkesh Patel 00:21:57

If you read his aphorisms, I think he would have actually been pretty good on Twitter.

Tyler Cowen 00:22:01

Maybe. But again, what does the equilibrium look like when everything changes? Clearly he was an impressive guy, especially given how much he drank.

Dwarkesh Patel 00:22:12

Okay, so even if you don't buy the Stanford kids, if you don't buy the young kids, the other trend he was talking about, where if you look at the leaders in government, whatever you think of Trump, Biden, we're not talking about Jefferson and Hamilton anymore. How do you explain that trend?

Tyler Cowen 00:22:30

Well, Jefferson and Hamilton, they're founders, right? And they were pretty young at the time. You can do great things when you're founding a country in a way that just cannot be replicated later on.

Putting aside the current year, which I think is weird for a number of reasons, I think mostly we have had impressive candidates. Most of the U.S. bureaucracy in Washington I think is pretty impressive: generals, national security people, top people in agencies, people at Treasury, people at the Fed. I interact with these groups pretty often.

Overall, they're impressive, and I've seen no signs they're getting worse. Now, if you want to say the two candidates this year, again, there's a lot we're not going to talk about, but there is a lot you could say on the negative side. Yes.

But like Obama, Romney, whichever one you like, I think these are two guys who should be running for president, and that was not long ago.

Dwarkesh Patel 00:23:27

So then there's a bunch of candidates running who are good. What goes systematically wrong in the selection process such that the two who are selected are not even as good as the average of all the candidates?

Tyler Cowen 00:23:37

You mean this theory?

Dwarkesh Patel 00:23:37

And I'm not talking about America in particular. If the theory is just noise, it seems like it skews one way.

Tyler Cowen 00:23:43

Well, the Democrats had this funny path with Biden, and Kamala didn't get through the electoral process in the normal way. So that just means you get weirdness, whatever you think of her as a candidate.

Trump, whom I do not want to win, I think he is extraordinarily impressive in some way, which along a bunch of dimensions exceeds a lot of earlier candidates. I just don't like the overall package. But I would not point to him as an example of low talent.

I think he's a supreme talent but harnessed to some bad ends.

Dwarkesh Patel 00:24:18

If you look at the early 20th century, some of the worst things that happened to progress is just these psychopathic leaders. What happened? Why did we have so many just awful, awful leaders in the early 20th century?

Tyler Cowen 00:24:30

Well, give me like a country and a name and a time period, and I'll try to answer it.

Dwarkesh Patel 00:24:34

He was from the university.

Tyler Cowen 00:24:37

That's what was wrong with him, right? Just think of what school it was.

Dwarkesh Patel 00:24:44

Who? Woodrow Wilson?

Tyler Cowen 00:24:44

Yeah, one of our two or three worst presidents on civil rights. World War I, he screwed up. The peace process, he screwed up.

Indirectly led to World War II. Reintroduced segregation of civil service in some key regards. Just seemed he was a nasty guy and should have been out of office sooner given his health and state of mind. So he was terrible.

But he was, on paper, a great candidate. Hoover on paper was a great candidate and was an extremely impressive guy. I think he made one very bad set of decisions relating to deflation and letting nominal GDP fall. But my goodness, there's a reason they called it the Hoover Institution after Hoover.

Dwarkesh Patel 00:25:25

But the Hitler, Stalin, Mao's. Was there something that was going on that explains why that was just a crummy time for world leaders?

Tyler Cowen 00:25:34

I don't think I have a good macro explanation of that whole trend, but I would say a few things. That's right after the period where the world is changing the most. I think when you get big new technologies, and this is relevant for AI, you get a lot of new arms races.

Sometimes the bad people win those arms races. So at least for quite a while, you had Soviet Russia and Nazi Germany winning some arms races, and they're not democratic systems. Later you have China with Mao being not a democratic system.

And then you have a mix of bad luck. Stalin and Mao were just draws from the urn. You could have gotten less crazy people than what you got.

And I agree with Hayek: the worst get to the top under autocracy. But that they're that bad? That was just some bad luck too.

There are other things you could say, but I think we had a highly disoriented civilization. You see it in aesthetics approaching beginnings of World War I. Art, music, radically changing. People feel very disoriented.

There's a lot up for grabs. Imperialism, colonialism start to be factors. There wasn't like a stable world order, and then you had some bad luck tossed into that. And all of a sudden, these super destructive weapon systems compared to what we had had, and it was awful.

I'm not pretending that's some kind of full explanation, but that would be like a partial start.

Dwarkesh Patel 00:26:55

You compared our current period to 17th-century England, where you have a lot of new ideas and things go topsy-turvy. What's your theory of why things go topsy-turvy at the same time when these eras come about? What causes this volatility?

Tyler Cowen 00:27:07

I don't think I have a general theory. If you want to talk about 17th-century England, they had the scientific revolution. You have the rise of England as a true global power. The navy becomes much more important, and the Atlantic trade route becomes much more important because of the new world.

Places like the Middle East, India, and China, which earlier had major roles (Persia, you know, was a major power), are crumbling, partially for reasons of their own. And that's going to help the relative power of the more advanced European nations. England has a lot of competition from the Dutch Republic and France.

This is happening at the same time that, for the first time in human history that I know of, we have sustained economic growth, according to Greg Clark starting in the 1620s at about 1% a year. And that is compounding, slow numbers, but compounding. England is the place that gets the compounding at 1% starting in the 1620s, and somehow they go crazy: civil war, kill the king, all these radical ideas. Libertarianism comes out of that, which I really like: John Milton, John Locke.

Also, this brutal conquest of the new world. Very good and very bad coming together. I think it should be seen as these sets of processes where very good and very bad come together, and we might be in for a dose of that again now, soon.

Dwarkesh Patel 00:28:23

It seems like a simple question, but how do you make sure we get the good things and not the crazy civil war as well?

Tyler Cowen 00:28:28

You try at the margin to nudge toward the better set of things. But it's possible that all the technical advances that recently have been unleashed, now that the great stagnation is over, which of course include AI, will mean another crazy period. It's quite possible. I think the chance of that is reasonably high.

Dwarkesh Patel 00:28:48

What's your most underrated cult?

Tyler Cowen 00:28:51

Most underrated cult? Progress studies.

Dwarkesh Patel 00:28:57

I think you called peak EA right before SBF fell.

Tyler Cowen 00:29:01

That's right. I was at an EA meeting, and I said, "Hey everyone, this is as good as it gets. Enjoy this moment. It's all basically going to fall apart. You're still going to have some really significant influence, but you won't feel like you have continued to exist as a movement."

That's what I said, and they were shocked. They thought I was insane, but I think I was mostly right.

Dwarkesh Patel 00:29:23

What specifically did you see? Was the exuberance too high? Did you see SBF's balance sheet?

Tyler Cowen 00:29:31

Well, I was surprised when SBF was insolvent. I thought it was a high-risk venture that had no regulatory defense and would end up being worth zero, but I didn't think he was actually playing funny games with the money. I just have a long history of seeing movements in my lifetime, from the 1960s onwards, including libertarianism.

There are common patterns that happen to them all. We're here in Berkeley. My goodness, free speech movement. Where's free speech in Berkeley today? How'd that work out in the final analysis?

It's a very common pattern. Just to think, "Wow, the common pattern is going to repeat itself," and then you see some intuitive signs, and you're just like, "Yeah, that's going to happen." And the private benefits of belonging to EA, they were very real in terms of the people you could hang out with or the sex you could have. But they didn't seem that concretely crystallized to me in institutions, the way they are in Goldman Sachs or legal partnerships.

That struck me as very fragile, and I thought that at the time as well.

Dwarkesh Patel 00:30:31

I'm not sure I understood. What were the intuitive signs?

Tyler Cowen 00:30:35

Well, not seeing the very clear, crystallized, permanent incentives to keep on being a part of the institutions. A bit of excess enthusiasm from some people, even where they might have been correct in their views. Some cult-like tendencies.

The rise of it being so rapid that it was this uneasy balance of secular and semi-religious elements, that tends to flip one way or the other or just dissolve. So, I saw all those things, and I just thought the two or three best ideas from this are going to prove incredibly important still. And from this day onwards, I don't give up that belief at all. But just as a movement, I thought it was going to collapse.

Dwarkesh Patel 00:31:16

When do we hit peak progress studies?

Tyler Cowen 00:31:19

You know, when Patrick and I wrote the piece on progress and progress studies, he and I thought about this, talked about it. I can't speak for him, but my view at least was that it would never be such a formal thing or controlled or managed or directed by a small group of people or trademarked. It would be people doing things in a very decentralized way that would reflect a general change of ethos and vibe.

So, I hope it has, in many ways, a gentler but more enduring trajectory. And I think so far I'm seeing that. I think in a lot of countries, science policy will be much better because of progress studies.

That's not proven yet. You see some signs of that. You wouldn't say it's really flipped, but there are a lot of reforms. You're in an area where no one else has any idea, much less a better idea or a good idea.

And some modestly small number of people with some talent will work on it and get like a third to half of what they want. That will have a huge impact, and if that's all it is, I'm thrilled. I think it will be more than that.

Dwarkesh Patel 00:32:23

I asked Patrick yesterday, how do you think about progress studies differently now that you know AI is a thing that's happening?

Tyler Cowen 00:32:29

Yeah.

Dwarkesh Patel 00:32:29

What's the answer for you?

Tyler Cowen 00:32:31

I don't think about it very differently. But again, if you buy my view about slow takeoff, why should it be that different? We'll have more degrees of freedom.

So, if you have more degrees of freedom, all your choices, decisions, issues, and problems are more complex. You're in more need of some kind of guidance. All inputs other than the AI rise in marginal value.

Since I'm an input other than the AI, or so I hope, that means I rise in marginal value, but I need to do different things. So, I think of myself over time as less a producer of content and more a connector, a people person, developing networks, in a way where, if there somehow had been no transformers and LLMs, I would have stayed a bit more the producer of content.

Dwarkesh Patel 00:33:15

When I was preparing to interview you, I asked Claude to take your persona. Compared to other people I tried this with, it actually works really well with you.

Tyler Cowen 00:33:25

Because I've written a lot on the internet.

Dwarkesh Patel 00:33:27

Yeah.

Tyler Cowen 00:33:27

That's why this is my immortality, right?

Dwarkesh Patel 00:33:31

That's right. I've heard you say in the past you don't expect to be remembered in the future. At the time, I don't know if you were considering that because of your volumes of text, you're going to have an especially salient persona in future models. How does that change your estimation of your intellectual contribution going forward?

Tyler Cowen 00:33:47

I do think about this. The last book I wrote is called GOAT: Who's the Greatest Economist of All Time? I'm happy if humans read it, but mostly I wrote it for the AIs. I wanted them to know I appreciate them.

My next book, I'm writing even more for the AIs. Again, human readers are welcome. It will be free.

But who reviews it? Is TLS going to pick it up? It doesn't matter anymore. The AIs will trawl it and know I've done this, and that will shape how they see me in, I hope, a very salient and important way.

As far as I can tell, no one else is doing this. No one is writing or recording for the AIs very much. But if you believe even a modest version of this progress—and I'm modest in what I believe relative to you and many of you—you should be doing this. You're an idiot if you're not writing for the AIs.

They're a big part of your audience, and their purchasing power will accumulate over time. They're going to hold a lot of crypto. We're not going to give them bank accounts, at least not at first.

Dwarkesh Patel 00:34:54

What part of your persona will be least captured by the AIs if they're only going by your writing?

Tyler Cowen 00:34:59

I think I should ask that as a question to you. What's your answer? I don't think AIs are that funny yet. They're better on humor than many people allege, but I don't use them for humor.

Dwarkesh Patel 00:35:10

It's interesting that you learn so much about a person when you're interviewing them for a job or for Emergent Ventures. You can read their application, but just in the first 10 minutes, or even three minutes, their vibe tells you so much. Whatever's going on there that's so informative, the AIs won't have it, just from the writing.

Tyler Cowen 00:35:24

Not at first. But I think I've heard of projects—this is secondhand, I'm not sure how true it is—but interviews are being recorded by companies that do a lot of interviews. These will be fed into AIs and coded in particular ways.

Then people, in essence, will be tracked through either their progress in the company or LinkedIn profile. We're going to learn something about those intangibles at some rate. I'm not sure how that will go, but I don't view it as something we can never learn about.

Dwarkesh Patel 00:35:55

Do you actually have a conscious theory of what's going on when you get on a call with somebody, and three minutes later you're like, "You're not getting the grant"? What happens?

Tyler Cowen 00:36:03

Well, often there's one question the person can't answer. If it's someone applying with a nonprofit idea, plenty of people have good ideas for nonprofits, and I see these all the time. But when you ask them the question, "How do you think about building out your donor base?" it's remarkable to me how many people have no idea how to answer that.

Without that, you don't have anything. So it depends on the area. But that would be an example of an area where I ask that question pretty quickly, and a significant percentage can't answer it.

I'm still willing to say, "Well, come back to me when you have a good case." Oddly, none of those people have ever come back to me that I can think of, but I think over time some will. That's a very concrete thing.

But there are other intangibles. Just when you see what the person thinks and talks about too much. So if someone wants to get an award only for their immigration status, that to me is a dangerous sign.

Even though at the same time, usually you're looking for people who want to come to the US, whether they can do it or not. There are just a lot of different signals you pick up, like people somehow have the wrong priorities, or they're referring to the wrong status markers. It comes through more than you would think.

Dwarkesh Patel 00:37:15

There must be cases where, if you only had the transcript of the call, you would say no, but when you can see the video, you say yes. What happens in those cases?

Tyler Cowen 00:37:25

Having only the transcript would be worth much, much less, I would say, if that's what you're asking. Yeah, it would be maybe 25% of the value.

Dwarkesh Patel 00:37:35

And what's going on with the 75%?

Tyler Cowen 00:37:36

We don't know. But I think you can become much better at figuring out the other 75%, partly just with practice.

Dwarkesh Patel 00:37:43

Yesterday Patrick was talking about these concentrations of talent that he sees in the history of science, with these labs that have six or seven Nobel Prizes. He was also talking about the second employee at Stripe, Greg Brockman, who wasn't visible to other parts of the startup ecosystem in the same way. What's your theory of what's going on? Why are these clusters so limited? What's actually being inherited and transmitted here?

Tyler Cowen 00:38:07

Well, Patrick was being too modest. I thought his answer there was quite wrong, but he sort of knows better. He was able to hire Greg Brockman because he's Patrick. It's very simple. He wasn't going to come out and just say that, and he may even deny it a bit to himself.

But if you're Patrick and John, you're going to attract some Greg Brockmans. And if you're not, it's just way, way harder because the Greg Brockmans are pretty good at spotting who are the Patricks and Johns. In a way, that's just pushing it back a step, but at least it's answering part of the question in a way that Patrick didn't because he was modest and humble.

Dwarkesh Patel 00:38:42

It seems like that makes the clusters less valuable then, because if Greg Brockman is just Greg Brockman, and Greg chose Patrick and John because they're Patrick and John, and Patrick and John chose Greg because he's Greg, it wasn't that they made each other great. It was just talent sees talent, right?

Tyler Cowen 00:38:56

Well, they make each other much better, just like Patrick and John made each other much better and still do. But you're getting back to my favorite theme of human bottlenecks. Thank you. I'm fully on board with what you're saying.

To get those—how many Beatles are there? It's amazing how much stuff doesn't really last. It's just super scarce, achievement at the very highest levels.

That's this extreme human bottleneck. And AI, even a pretty strong AI, it remains to be seen how much it helps us with that.

Dwarkesh Patel 00:39:25

I'm guessing ever since you wrote the progress studies article, you got a lot of applications for Emergent Ventures from people who want to do some progress studies thing on the margins. Do you wish you got fewer of those proposals or more of them? Do you just wish they were unrelated?

Tyler Cowen 00:39:38

I don't know. To date, a lot of them have been quite good, and many of them are people who are here. There's a danger that as the thing becomes more popular, at the margin they become much worse.

I guess I'm expecting that, so maybe mentally I'm raising my own bar on those. Maybe over time, I find it more attractive if the person is interested in, say, the Industrial Revolution. If they're interested in progress studies, capital P, capital S, I'm growing more skeptical of that over time.

Not that I think there's anything intrinsically bad about it—I'm at a progress studies conference with you. But still, when you think about selection and adverse selection, I think you've got to be very careful and keep on raising the bar there. It's still probably good if those people do something in capital P, capital S progress studies, but it's not necessarily good for Emergent Ventures to just keep on funding the number.

Dwarkesh Patel 00:40:34

If you buy your picture of AI where it increases growth rates by half a percentage point, what does your portfolio look like?

Tyler Cowen 00:40:41

I can tell you what my portfolio is. It's a lot of diversified mutual funds with no trading, pretty heavily US-weighted, and nothing in it that would surprise you. My wife works for the SEC, so we're not allowed to do most things.

Even to buy most individual stocks, you may not be allowed to do it, and certainly not allowed derivatives or shorting anything. But if somehow that restriction were not there, I don't think it would really matter. So, buy and hold, diversify, hold on tight, and make sure you have some cheap hobbies and are a good cook.

Dwarkesh Patel 00:41:17

Why aren't you more leveraged if you think the growth rate's going to go up even slightly?

Tyler Cowen 00:41:22

Well, I also have this view: maybe a lot of the equity premium is in the past. People, especially in this part of the world, are very good at internalizing value, and it will be held and earned in private markets and by VCs rather than public pension funds. Why give it to them?

I think Silicon Valley has figured this out. Sand Hill Road has figured it out. So, what one can do with public equities is unclear.

What private deals I can get in on with my really tiny sum of wealth is pretty clear. So I'm left with that.

Money for me is not what's scarce; time is scarce. I do have some very cheap hobbies, and I feel I'm in very good shape in that regard.

Dwarkesh Patel 00:42:05

That being said, I think you could get pretty good deal flow even with your portfolio.

Tyler Cowen 00:42:10

I don't know. You can only focus on so many things. I have good deal flow in Emergent Ventures, which I'm not paid to do. Say I had a billion dollars from whatever.

I wouldn't have any better way of spending that billion dollars than buying myself a job doing Emergent Ventures or whatever. So, I'm already where I would be if I could buy the thing for a billion dollars. I'm just not that focused on it.

I think it's good that you limit your areas of focus. For some people, it's just money. I think that's great. I don't begrudge them that at all. I think it's socially valuable. Let's have more of it. Bring it on. But it's just not me.

When I started my career, it was basically unknown for an economist to really earn anything at all. There were no tech jobs with billionaires. Finance was a low-paying field. When I was choosing a career, it was not a thing. There wasn't this fancy Goldman Sachs. It was a slow, boring thing.

Programmers were weird people in basements. And then an economist, you would earn maybe $40,000 a year. Two people, Milton Friedman and Paul Samuelson, had outside income, and you had no expectation that you would ever earn more than that.

I went into this with all of that. Relative to that, I feel so wealthy. Just like, oh, you can sell some books, or you can give a talk. I just feel like I am a billionaire now.

If anything, I want to become what I've called an information trillionaire. I'm not going to make that, but I think it's a good aspiration to have. Just collect more information and be an information trillionaire. Dana Gioia has that same goal. He and I have talked about this. I think that's a very worthy goal.

Dwarkesh Patel 00:43:57

Was there a second field that you were considering going to other than economics?

Tyler Cowen 00:44:01

It was either economics or philosophy. And I saw back then (this would be the late 1970s) that it was much harder to get a job as a philosopher, though not impossible the way it sort of is now. They were paid less and just had fewer opportunities. So I thought, well, I'll do economics. But I think in a way, I've always done both.

Dwarkesh Patel 00:44:20

Okay, I really want to go back to this diffusion thing we were talking about at the beginning with economic growth.

Tyler Cowen 00:44:24

Yeah.

Dwarkesh Patel 00:44:24

Because I feel like I don't... What am I not understanding? I hear the word "diffusion," I hear the word "bottlenecks," but I just don't have anything concrete in my head when I hear that. What are people who are thinking about AI missing here when they just plug these things into their models?

Tyler Cowen 00:44:39

I'm not sure I'm the one to diagnose, but I would say when I'm in the Bay Area, the people here to me are the smartest people I've ever met, on average. They're the most ambitious, dynamic, and smartest, by a clear grand slam compared to New York City or London or anywhere. That's awesome, and I love it.

But I think a side result of that is that people here overvalue intelligence. Their models of the world are built on intelligence mattering much, much more than it really does. Now, people in Washington don't have that problem. We have another problem, and that needs to be corrected, too.

But I just think if you could root that out of your minds, it would be pretty easy to glide into this expert consensus view that tech diffusion is universally pretty slow. And that's not going to change. No one's built a real model to show why it should change, other than sort of hyperventilating blog posts about how everything's going to change right away.

Dwarkesh Patel 00:45:43

The model is that you can have AIs make more AIs, right? That you can have returns.

Tyler Cowen 00:45:47

Ricardo knew this, right? He didn't call it AI, but Malthus and Ricardo, they all talked about this. It was just humans for them. Well, people then would breed. They would breed at some pretty rapid rate. There were diminishing returns to that.

You had these other scarce factors. Classical economics figured that out. They were too pessimistic, I would say. But they understood the pessimism intrinsic in diminishing returns in a way that people in the Bay Area do not.

And it's better for them that they don't know it. But if you're just trying to inject truth into their veins rather than ambition, diminishing returns is a very important idea.

Dwarkesh Patel 00:46:24

In what sense was that pessimism correct? Because we do have seven billion people, and we have a lot more ideas as a result. We have a lot more industries.

Tyler Cowen 00:46:30

Yeah, I said they were too pessimistic, but they understood something about the logic of diffusion, where if they could see AI today, I don't think they would be so blown away by it. They'd say, "Oh, you know, I read Malthus." Ricardo would say, "Malthus and I used to send letters back and forth. We talked about diminishing returns. This will be nice. It'll extend the frontier, but it's not going to solve all our problems."

Dwarkesh Patel 00:46:55

One concern you could have about progress in general is, if you look at the famous Adam Smith example, you've got that pin maker, and the specialization obviously has all these efficiencies. But the pin maker is just doing this one thing. Whereas if you're in the ancestral environment, you get to basically negotiate with every single part of what it takes to keep you alive, and every other person in your tribe does. Is individuality lost with more specialization, more progress?

Tyler Cowen 00:47:22

Well, Smith thought it would be. I think compared to his time, we have much more individuality, most of all in the Bay Area. That's a good thing.

I worry the future with AI, that a kind of demoralization will set in, in some areas. I think there'll be full employment pretty much forever. That doesn't worry me.

But what we will be left doing, what exactly it will be, and how happy it will make us... Again, I don't have pessimistic expectations. I just see it as a big change. I don't feel I have a good prediction. And if you don't have a good prediction, you should be a bit wary and just say, "Okay, we're going to see." But, you know, some words of caution.

Dwarkesh Patel 00:48:03

Are merited. When you're learning about a new field, the vibe I get from you when you're doing a podcast is that you're picking up the long tail: you talk to interesting people, or you read the book that nobody else would have considered. How often do you just have to read the main textbook versus looking at the esoteric thing? How do you balance that trade-off?

Tyler Cowen 00:48:24

Well, I haven't interviewed that many scientists. Ed Boyden would be one, Richard Prum, the ornithologist from Yale—those are very hard preps. I think those are two excellent episodes, but I'm limited in how many I can do by my own ability to prepare.

I like doing historians the most because the prep is a lot of work, but it's easy, fun work for me. I know I always learn something.

Now I'm prepping for Stephen Kotkin, who's an expert on Stalin and Soviet Russia. That's been a blast. I've been doing that for four months, reading dozens of books, and it's very automatic.

Whereas you try to figure out what Ed Boyden is doing with light shining into the brain, it's like, "Oh my goodness, do I understand this at all?" Or am I like the guy who thinks the demand curve slopes upward? So it just means I'm only going to do a smallish number of scientists, and that's a shame. But maybe AI can fill in for us there.

Dwarkesh Patel 00:49:19

You recommended a book to me, Stalin's Library, which talks about the different books that Stalin read and the fact that he was kind of a smart, well-read guy. The book also mentioned in the early chapters that in all his annotations, if you look through all his books, there's never anything that even hints that he doubted Marxism.

Tyler Cowen 00:49:38

That's right. There's a lot of other evidence that that's the correct portrait.

Dwarkesh Patel 00:49:41

What's going on there? A smart guy who's read all this literature, all these different things, never even questions Marxism. What do you think?

Tyler Cowen 00:49:48

I think the culture he came out of had a lot of dogmatism to begin with. I mean Leninism, which is extremely dogmatic, and Lenin was his mentor.

Like Patrick's thing about the Nobel laureates, it happens in insidious ways, too. So, Lenin is the mentor of Stalin.

Soviet culture, communist culture, and then Georgian culture, which, appealing and fun-loving and wine-drinking and dance-heavy as it is, there's something about it that's a little, you know, you pound the fist down and you tell people over the table how things are. He had all those stacked vertically. Then we got this bad genetic luck of the draw on Stalin, and it turned out obviously pretty terrible.

Dwarkesh Patel 00:50:34

And if you buy Hayek's explanation that the reason he rose to the top is just because the most atrocious people win in autocracies, what is that missing?

Tyler Cowen 00:50:43

I think what Hayek said is subtler than that. And I wouldn't say it's Hayek's explanation; I would say Hayek pinpointed one factor. There are quite a few autocracies in the world today where the worst people have not risen to the top.

The UAE would be, I think, the most obvious example. I've been there. As far as I can tell, they're doing a great job running the country.

There are things they do that are nasty and unpleasant. I would be delighted if they could evolve into a democracy, but the worst people are not running the UAE, this I'm quite sure of.

So, it's a tendency. There are other forces, but culture really matters. Hayek is writing about a very specific place and time.

It really surprised me. There are these family-based Gulf monarchies with very large, clannish interest groups of thousands of people that have proven more stable and more meritocratic than I ever would have dreamed, say in 1980. I know I don't understand it, but I just see it in the data. It's not just UAE; there are a bunch of countries over there that have outperformed my expectations, and they all have this broadly common system, actually. And they're not ruled by their worst people.

Dwarkesh Patel 00:51:51

That makes you wonder, when you go around the world—because I know you go outside the Bay Area and the East Coast as well—and you talk about progress studies related ideas, what's the biggest difference in how they're received versus the audience here?

Tyler Cowen 00:52:02

Well, the audience here is so different. You're the outlier place of America. And then where I normally am, outside of Washington, D.C., that's the other outlier place. And in a way, we're opposite outliers.

I think that's healthy for me, both where I live and that I come here a lot and that I travel a lot. But you all are so out there in what you believe. I'm not sure where to start.

You come pretty close to thinking in terms of infinities, on the creative side and the destructive side. And no one in Washington thinks in terms of infinities; they think at the margin. Overall, I think they're much wiser than the people here.

But I also know if everyone, or even more people, thought like the D.C. people, our world would end. We wouldn't have growth. They're terrible.

People in the EU are super wise. You have a meal with some sort of French person who works in Brussels—it's very impressive. They're cultured, they have wonderful taste, they understand all these different countries, they know something about Chinese porcelain. And if you lived in a world ruled by them, the growth rate would be negative 1%.

So there's some way in which all these things have to balance. I think the US has done a marvelous job at that, and we need to preserve that.

What I see happening: the UK used to do a great job at it, but somehow the balance there is out of whack, and you have too many non-growth-oriented people in the cultural mix.

Dwarkesh Patel 00:53:33

The way you describe this French person you're having dinner with...

Tyler Cowen 00:53:36

These are real dinners. And the food is good, too.

Dwarkesh Patel 00:53:40

It kind of reminds me of you, in the sense that you're also well-cultured, and you know all these different esoteric things. What's the biggest difference between you and these French people you have dinner with?

Tyler Cowen 00:53:54

I don't think I'm well-cultured, would be one difference. There are many differences. First, I'm an American. I'm a regional thinker. I'm from New Jersey, so I'm essentially a barbarian, not a cultured person.

I have a veneer of culture that comes from having collected a lot of information. So I'll know more about culture than a lot of people, and that can be mistaken for being well-cultured, but it's really quite different. It's like a barbarian's approach to culture.

It's like a very autistic approach to being cultured and should be seen as such. So I feel the French person is very foreign to me.

And there's something about America they might find strange or repellent. And I'm just so used to it. I see intellectually the many areas where we fall flat or are destructive, but it doesn't bother me that much because I'm so used to it.

Dwarkesh Patel 00:54:42

What is most misunderstood about autism?

Tyler Cowen 00:54:45

If you look at the formal definition, it's all about deficits that people have right now. If you define it that way, no one here is autistic. If you define it some other way, which maybe we haven't pinned down yet, a third of you here are autistic.

I don't insist on owning the definition. I think it's a bad word. It's like "libertarian." I would gladly give it away.

But there is some coherent definition where a third of you here probably would qualify. And this other definition where none of you would, it's like kids in mental homes banging their head against the wall. I don't know, it seems that whole issue needs this huge reboot.

Dwarkesh Patel 00:55:22

One frustration that tech people have is that they have very little influence, it seems, in Washington compared to how big that industry is. Industries that are much smaller will have much greater sway in Washington. Why is tech so bad at having influence in Washington?

Tyler Cowen 00:55:38

I think you're getting a lot more influence than maybe you realize, quickly, through national security reasons. The feds have not stopped the development of AI, whatever you think they should or should not do. It's basically proceeded.

National security as a lobby, they don't care about tech per se. But it has meant that on a whole bunch of things in the future, you will get your way a bit more than you might be expecting.

A key problem you have is so much of it is in one area, and it's also an area where there's a dominant political party. Even within that political party, there are many parts of California with a dominant faction.

Compare yourself to the community bankers who are in so many American counties, who have connections to every single person in the House of Representatives. Your issues in a way are not very partisan. The distortions you cause through your privileges are invisible to America.

It's not like Facebook, where some Jonathan Haidt has written a best-selling book complaining about what it is you do. There's not a best-selling book complaining about the community banks, and they are ruthless and powerful and get their way. I'm not going to tangle with them.

You all here are so far from that, in part because you're dynamic and you're clustered.

Dwarkesh Patel 00:56:54

Final question. Based on yesterday's session, it seems like Patrick's main misgiving with progress is that if you look at the young girl cohort, there's something that's not going great with them. You would hope that over time, progress means that people are getting better and better over time. If you buy his view of what's happening with young people, what's your main misgiving about progress? Where if you look at the time series data, you're not sure you like where this trend is going?

Tyler Cowen 00:57:21

Our main concern always should be war. I don't have any grand theory of what causes war, or if such a theory ever is possible. But I do observe in history that when new technologies come along, they are turned into instruments of war, and some terrible things happen.

You saw this in 17th-century England. You saw this with electricity and the machine gun. Nuclear weapons are a story still in process.

I'm not sure that's ever going away. So, my main concern with progress is progress and war interact, and it can be in good ways. Like the world, à la Steven Pinker, has had relative peace.

That's fraying at the edges in the data. The numbers are now moving the wrong way, but it's still way better than most past time periods. We'll have to see where that goes.

There might be a ratchet effect where wars become more destructive. And even if they're more rare, when they come, each one's a real doozy. Whether we really are or ever can be ready for that, I'm just not sure.

And thank you very much, Dwarkesh.

Dwarkesh Patel 00:58:18

This will be the second session we have to end on a pessimistic note.

Tyler Cowen 00:58:21

The optimistic note is that we're here. Human agency matters. If we were all sitting around in the year 1000, we never could have imagined the world being anything like this, even a much poorer version of it.

It's up to us to take this extraordinary and valuable heritage and do some more with it. And that's why we're here. So, I say let's give it a go.

Dwarkesh Patel 00:58:44

Great note to end on. Thanks, Tyler.