Dwarkesh Podcast
Nat Friedman - Reading Ancient Scrolls, Open Source, & AI


Lost gospels, forgotten epics, & missing works of Aristotle.

It is said that the two greatest problems of history are: how to account for the rise of Rome, and how to account for her fall. If so, then the volcanic ash spewed by Mount Vesuvius in 79 AD, which entombed the cities of Pompeii and Herculaneum in southern Italy, holds history's greatest prize. For beneath those ashes lies the only salvageable library from the classical world.

Nat Friedman was the CEO of GitHub from 2018 to 2021. Before that, he started and sold two companies, Ximian and Xamarin. He is also the founder of AI Grant and California YIMBY.

And most recently, he has created and funded the Vesuvius Challenge - a million dollar prize for reading an unopened Herculaneum scroll for the very first time. If we can decipher these scrolls, we may be able to recover lost gospels, forgotten epics, and even missing works of Aristotle.

We also discuss the future of open source and AI, running Github and building Copilot, and why EMH is a lie.

Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes.

Timestamps

(0:00:00) - Vesuvius Challenge

(0:30:00) - Finding points of leverage

(0:37:39) - Open Source in AI

(0:40:32) - GitHub Acquisition

(0:50:18) - Copilot Origin Story

(1:11:47) - Nat.org

(1:32:56) - Questions from Twitter

Transcript

Dwarkesh Patel 

Today I have the pleasure of speaking with Nat Friedman, who was the CEO of GitHub from 2018 to 2021. Before that, he started and sold two companies, Ximian and Xamarin. And he is also the founder of AI Grant and California YIMBY. And most recently, he is the organizer and founder of the Scroll prize, which is where we'll start this conversation. Do you want to tell the audience about what the Scroll prize is? 

Vesuvius Challenge

Nat Friedman 

We're calling it the Vesuvius Challenge. It is just this crazy and exciting thing I feel incredibly honored to have gotten caught up in. A couple of years ago, in the midst of COVID, we were in a lockdown, and like everybody else, I was falling into internet rabbit holes. And I just started reading about the eruption of Mount Vesuvius in Italy, about 2000 years ago. When Vesuvius erupted in AD 79, it destroyed all the nearby towns; everyone knows about Pompeii. But there was another nearby town called Herculaneum. And Herculaneum was sort of the Beverly Hills to Pompeii: big villas, big houses, fancy people. And in Herculaneum, there was one enormous villa in particular. It had once been owned by the father-in-law of Julius Caesar, a well-connected guy. And it was full of beautiful statues and marbles and art. But it was also home to a huge library of papyrus scrolls. When the villa was buried, the volcano spit out enormous quantities of mud and ash, and it buried Herculaneum in something like 20 meters of material. So it wasn't a thin layer; it was a very thick layer. Those towns were buried and forgotten for hundreds of years. No one even knew exactly where they were until the 1700s. In 1750, a farm worker who was digging a well on the outskirts of Herculaneum struck a marble paving stone of a path at this huge villa. He was pretty far down when he did that, 60 feet down. And then subsequently, a Swiss engineer came in and started digging tunnels from that well shaft, and they found all these treasures. Looting was sort of the spirit of the time. If they encountered a wall, they would just bust through it, and they were taking out these beautiful bronze statues that had survived. And along the way, they kept encountering these lumps of what looked like charcoal. They weren't sure what they were, and many were apparently thrown away, until someone noticed a little bit of writing on one of them.
And they realized they were papyrus scrolls, and there were hundreds, even thousands of them. So they had uncovered this enormous library, the only library to have survived from the classical world in any form, even though it's badly damaged and the scrolls are carbonized and very fragile. In a Mediterranean climate these papyrus scrolls rot and decay quickly. They'd have to be recopied by monks every 100 years or so, maybe even less. It's estimated that we have less than 1% of all the writing from that period.

It was an enormous discovery to find these hundreds of papyrus scrolls underground. Even if they were not in good condition, they were at least still present. On a few of them, you can make out the lettering. In a well-meaning attempt to read them, people immediately started trying to open them. But they're really fragile, so they turned to ash in your hand. And so hundreds were destroyed. People did things like cut them with daggers down the middle, and a bunch of little pieces would flake off, and they tried to get a few letters off of a couple of pieces.

Eventually there was an Italian monk named Piaggio. He devised this machine, under the care of the Vatican, to unroll these things very, very slowly, like half a centimeter a day. A typical scroll could be 15 or 20 or 30 feet long, and he managed to successfully unroll a few of these. On them they found Greek philosophical texts, in the Epicurean tradition, by this little-known philosopher named Philodemus. But we got new text from antiquity, which is not a thing that happens all the time. Eventually, people stopped trying to physically unroll these things because so many were destroyed. In fact, some attempts to physically unroll the scrolls continued even into the 1900s, and those scrolls were destroyed.

The current situation is we have 600-plus roughly intact scrolls that we cannot open. I heard about this and I thought it was incredibly exciting, the idea that there was information from 2000 years in the past. We don't know what's in these things. And obviously, people are trying to develop new ways and new technologies to open them.

I read about a professor at the University of Kentucky, Brent Seales, who had been trying to scan these using increasingly advanced imaging techniques, and then use computer vision and machine learning to virtually unroll them without ever opening them. They tried a lot of different things, but their most recent attempt, in 2019, was to take the scrolls to a particle accelerator in Oxford, England, called the Diamond Light Source, and to make essentially an incredibly high resolution 3D X-ray scan. And they needed really high energy photons in order to do this. And they were able to take scans at eight microns, really quite tiny voxels, which they thought would be sufficient.

I thought this was like the coolest thing ever: we're using technology to read this lost information from the past. And I waited for the news that they had been decoded successfully. That was 2020, and then COVID hit; everybody got slowed down a bit by that. Last year, I found myself wondering what happened to Dr. Seales and his scroll project. I reached out, and it turned out they had been making really good progress. They had gotten some machine learning models to start to identify ink inside of the scrolls, but they hadn't yet extracted words or passages; it's very challenging.

I invited him to come out to California and hang out, and to my shock he did. We got to talking and decided to team up and try to crack this thing. The approach that we've settled on to do that is to actually launch an open competition. We've done a ton of work with his team to get the data and the tools and techniques, and just the broad understanding of the materials, into a shape where smart people can approach it and get productive easily. And I'm putting up, together with Daniel Gross, a prize, sort of like an XPRIZE or something like that, for the first person or team who can actually read substantial amounts of real text from one of these scrolls without opening them.

We're launching that this week, which I guess is maybe when this airs. What gets me excited are the stakes. The stakes are kind of big. There are six or eight hundred scrolls there, and it's estimated that if we could somehow read all of them, if the technique works and generalizes to all the scrolls, then that would approximately double the total text that we have from antiquity. This is what historians are telling me.

So it's not like, oh, we would get a 5% bump or a 10% bump in the total ancient Roman or Greek text. No, we'd get as much again as all of the texts that we have; multiple Shakespeares is one of the units that I've heard. So that would be significant. We don't know what's in there. We've got a few Philodemus texts, and those are of some interest. But there could be lost epic poems, or God knows what. So I'm really excited, and I think there's a 50% chance that someone will encounter this opportunity, get the data, get nerd-sniped by it, and we'll solve it this year.

Dwarkesh Patel 

I mean, really, it is something out of a science fiction novel. It's like something you'd read in Neal Stephenson or something. I was talking to Professor Seales before and apparently the shock went both ways. Because the first few emails, he was like – this has got to be spam. Like no way Nat Friedman is reaching out and has found out about this prize. 

Nat Friedman 

That's really funny, because he was actually pretty hard to get in touch with. I emailed him a couple times, but he just didn't respond. I asked my admin, Emily, to call the secretary of his department and say, "Mr. Friedman requested me," and then he knew there was something actually going on there. So he finally got on the phone with me and we got on Zoom. And he's like, why are you interested in this? I love Brent, he's fantastic, and I think we're friends now. We found that we think alike about this, and he's reached the point where he just really wants to crack it. They've taken this right up to the one-yard line; this is doable at this point. They've demonstrated every key component. Putting it all together, improving the quality, doing it at the scale of a whole scroll, this is still very hard work. And an open competition seems like the most efficient way to get it done.

Dwarkesh Patel 

Before we get into the state of the data and the different possible solutions, I want to make tangible what could be gained if we can unwrap these. You said there are a few thousand more scrolls? Are we talking about the ones in Philodemus's library, or are we talking about the ones in other layers?

Nat Friedman 

You'd think if you find this crazy villa that was owned by Julius Caesar's father-in-law, then we'd just dig the whole thing out. But in fact, most of the exploration occurred in the 1700s, through the Swiss engineer's underground tunnels. The villa was never dug out and exposed to the air. They went down 50 to 60 feet and then dug tunnels. And again, they were looking for treasure; it was not a full archaeological exploration. So they mostly got treasure. In the 90s some additional excavations were done at the edge of the villa and they discovered a couple of things. First, they discovered that it was a seaside villa that faced the ocean. It was right on the water before the volcano erupted. The eruption actually pushed the shoreline out by depositing so much additional mud there. So it's no longer right by the ocean, apparently; I've actually never been.

And they also found that there were two additional floors in the villa that the tunnels had apparently never reached. And so at most a third of the villa has been excavated. Now, they also know from when they were discovering these papyrus scrolls that there was basically one little room where most of the scrolls were. And these were mostly these Philodemus texts, at least as far as we know. And they apparently found several revisions, sometimes of the same text. The hypothesis is that this was actually Philodemus's working library; he worked there, this sort of Epicurean philosopher. In the hallways, though, they occasionally found other scrolls, including crates of them. And the belief is, at least this is what historians have told me, and I'm no expert, that the main library in this villa has probably not been excavated. And that the main library may be a Latin library, and may contain literary texts, historical texts, other things, and that it could be much larger. Now, I don't know how prone these classicists are to wishful thinking. It is a romantic idea. But they have some evidence in the presence of these partly evacuated scrolls that were found in hallways, and that sort of thing. I've since gone and read a bunch of the firsthand accounts of the excavations. There are these heartbreaking descriptions of them finding an entire case of scrolls in Latin, and accidentally destroying it as they tried to get it out of the mud, and there were maybe 30 scrolls or something in there. There clearly was some other stuff that we just haven't gotten to.

Dwarkesh Patel 

You made some scrolls, right?

Nat Friedman 

Yeah. This is papyrus, and it's a grassy reed that grows on the Nile in Egypt. And for many thousands of years they've been making paper out of it. And the way they do it is they take the outer rind off of the papyrus and then they cut the inner core into these strips. They lay the strips out parallel to one another and they put another layer at 90 degrees to that bottom layer. And they press it together in a press or under stones and let it dry out. And that's papyrus, essentially. And then they'll take some of those sheets and glue them together with paste, usually made out of flour, and get a long scroll. You can still buy it, I bought this on Amazon, and it's interesting because it's got a lot of texture. Those fibers, ridges of the papyrus plant, mean that when you write on it, you really feel the texture. I got it because I wanted to understand what these artifacts we're working with are like. So we made an attempt to simulate carbonizing a few of these. We basically took a Dutch oven, because when you carbonize something and you make charcoal, it's not like burning it with oxygen; you remove the oxygen, heat it up, and let it carbonize. We tried to simulate that with a Dutch oven, which is probably imperfect, and left it in the oven at 500 degrees Fahrenheit for maybe five or six hours. These things are incredibly light, and if you try to unfold them, they just fall apart in your hand very readily. I assume these are in somewhat better shape than the ones that were found, because these were not in a volcanic eruption and covered in mud. Maybe that mud was hotter than my oven can go. And they're just flakes; just squeeze one and it's just dust in your hand. And so we actually tried to replicate many of the heartbreaking 18th-century unrolling techniques. They used rose water, for example, or they tried to use different oils to soften it and unroll it. And most of them are just very destructive.
They poured mercury into it because they thought mercury would slip between the layers potentially. So yeah, this is sort of what they look like. They shrink and they turn to ash.

Dwarkesh Patel 

For those listening, by the way, just imagine the ash of a cigar but blacker and it crumbles the same way. It's just a blistered black piece of rolled up Papyrus.

Nat Friedman 

Yeah. And they blister, the layers can separate. They can fuse. 

Dwarkesh Patel 

And so this happened in 79 AD right? So we know that anything before that could be in here, which I guess could include? 

Nat Friedman 

Yes. What could be in there? I don't know. You and I have speculated about that, right? It would be extremely exciting not to just get more Epicurean philosophy, although that's fine, too. But almost anything would be interesting and additive. The dream is... I think it would maybe have a big impact to find something about early Christianity, like a contemporaneous mention of early Christianity. Maybe there'd be something that the church wouldn't want; that would be exciting to me. Maybe there'd be some color detail from someone commenting on Christianity or Jesus. I think that would be a very big deal. We have no such things as far as I know. Other things that would be cool would be old stuff, like even older stuff. There were several scrolls already found there that were hundreds of years old when the villa was buried. As per my understanding, the villa was probably constructed about 100 years prior, and they can date some of the scrolls from the style of writing. And so there was some old stuff in there. And the Library of Alexandria was burned 80 or 90 years prior. And so, again, maybe wishful thinking, but there are some rumors that some of those scrolls were evacuated, and maybe some of them would have ended up at this substantial, prominent Mediterranean villa. God knows what'll be in there; that would be really cool. I think it'd be great to find literature. Personally I think that would be exciting, like beautiful new poems or stories; we just don't have a ton because so little survived. I think that would be fun. You had the best, crazy idea for what could be in there, which was text with GPT watermarks. That would be a creepy feeling.

Dwarkesh Patel 

I still can't get over just how much this is the plot of a sci-fi novel. Potentially the biggest intact library from the ancient world, stopped by this volcano like a program in a debugger. The philosophers of antiquity forgotten, the earliest gospels, there's so much interesting stuff there. But let's talk about what the data looks like. You mentioned that they've been CT scanned, and that they built these machine learning techniques to do segmentation and the unrolling. What would it take to get from there to understanding the actual content of what is within?

Nat Friedman 

Dr. Seales actually pioneered this field of what is now widely called virtual unwrapping. And he did not initially do it with these Herculaneum scrolls; these things are like expert mode, they're so difficult. I'll tell you why soon. But he initially did it with a scroll that was found near the Dead Sea in Israel. It's called the En-Gedi scroll, and it was carbonized under slightly similar circumstances. I think there was a temple that was burned. The scroll was in a box, so it kind of acted like a Dutch oven and carbonized in the same way. And so it was not openable; it would fall apart. So the question was, could you nondestructively read the contents of it? So he did this 3D X-ray, the CT scan of the scroll, and then was able to do two things. First, the ink gave a great X-ray signature. It looked very different from the papyrus; it was high contrast. And then second, he was able to segment the windings of the scroll throughout the entire body of the scroll and identify each layer, and then just geometrically unroll it using fairly normal flattening computer vision techniques, and then read the contents. It turned out to be an early part of the book of Leviticus, part of the Old Testament, or the Torah. And that was a landmark achievement. Then the next idea was to apply those same techniques to this case. This has proven hard. There are a couple of things that make it difficult. The primary one is that the ink used on the Herculaneum papyri is not very absorbent of X-rays. It basically seems to be equally as absorbent of X-rays as the papyrus. Very close, certainly not perfectly. So you don't have this nice bright lettering that shows up on your tomographic 3D X-ray. So you have to somehow develop new techniques for finding the ink in there. That's sort of problem one, and it's been a major challenge. And then the second problem is the scrolls are just really messed up.
They were long and tightly wound, and highly distorted by the volcanic mud, which not only heated them but partly deformed them. So just the segmentation problem of identifying each of these layers throughout the scroll is doable, but it's hard. Those are a couple of challenges. And then the other challenge, of course, is just getting access to scrolls and taking them to a particle accelerator. So you have to have scroll access and particle accelerator access, and time on those. It's expensive and difficult. Dr. Seales did the hard work of making all that happen. The good news is that in the last couple of months, his lab has demonstrated the ability to actually recognize ink inside these X-rays with a convolutional neural network. I look at the X-ray scans and I can't see the ink, at least in any of the renderings that I've seen, but the machine learning model can pick up on very subtle patterns in the X-ray absorption at high resolution inside these volumes in order to identify it. And we've seen that. So you might ask: okay, how do you train a model to do that? Because you need some kind of ground truth data to train the model. The big insight that they had was to train on broken-off fragments of the papyrus. As people tried to open these over the years in Italy, they destroyed many of them, but they saved some of the pieces that broke off. And on some of those pieces, you can kind of see lettering. And if you take an infrared image of the fragment, then you can really see the lettering pretty well, in some cases. I think it's at 930 nanometers that they take this little infrared image. Now you've got some ground truth. Then you do a CT scan of that broken-off fragment, and you try to align it, register it with the image. And then you have data that you can use potentially to train a model. That turned out to work in the case of the fragments.
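The fragment-based supervision described above can be sketched in a few lines. This is a toy illustration, not Dr. Seales' actual pipeline: the arrays, shapes, threshold, and the `sample_pairs` helper are all made up, standing in for a CT volume registered to a binarized infrared photo of the same fragment.

```python
import numpy as np

# Stand-ins for real data: a CT subvolume of a fragment (depth, height, width)
# and an aligned infrared image thresholded into ink / no-ink ground truth.
rng = np.random.default_rng(0)
ct = rng.random((16, 64, 64))        # fake CT intensities
ir_ink = rng.random((64, 64)) > 0.7  # fake ground truth: True where IR shows ink

def sample_pairs(ct, ir_ink, patch=8, n=100, rng=rng):
    """Build (subvolume, label) training pairs: a small column of the CT
    volume, labeled by the registered infrared image at the patch center."""
    d, h, w = ct.shape
    pairs = []
    for _ in range(n):
        y = int(rng.integers(0, h - patch))
        x = int(rng.integers(0, w - patch))
        subvol = ct[:, y:y + patch, x:x + patch]        # full-depth CT column
        label = ir_ink[y + patch // 2, x + patch // 2]  # ink at patch center
        pairs.append((subvol, label))
    return pairs

pairs = sample_pairs(ct, ir_ink)
# Each pair is what a 3D CNN would train on: X-ray texture in, ink/no-ink out.
```

Once a model is trained on such pairs, the hope is that it can be slid across the segmented surface of an intact scroll, where no infrared ground truth exists.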

I think this is sort of the "why now?" This is why I think launching this challenge now is the right time: because we have a lot of reasons to believe it can work. The core techniques, the core pieces, have been demonstrated; it all just has to be put together at the scale of these really complicated scrolls. And so yeah, if you can do the segmentation, which is probably a lot of work, and maybe there's some way to automate it, and then you can figure out how to apply these models inside the body of a scroll and not just to these fragments, then it seems like you could probably read lots of text.

Dwarkesh Patel 

Why did you decide to do it in the form of a prize, rather than just giving a grant to the team that was already pursuing it, or maybe some other team that wants to take it on?

Nat Friedman 

We talked about that. But I think what we basically concluded was that the search space of different ways you could solve this is pretty big. And we just wanted to get it done as quickly as possible. Having a contest means lots of people are going to try lots of things, and, you know, someone's gonna figure it out quickly. Many eyes may make the task shallow. I think that's the main thing. Probably a single team could do it, but I think this will just be a lot more efficient. And it's fun too. I think it's interesting to do a contest, and who knows who will solve it or how? People may not even use machine learning. We think that's the most likely approach for recognizing the ink, but they may find some other approach that we haven't thought of.

Dwarkesh Patel 

One question people might have is that you have these visible fragments mapped out. Do we expect them to correspond to the ashen, carbonized scrolls that you want to do machine learning on? Will the ground truth of one correspond to the other?

Nat Friedman 

I think that's a very legitimate concern; they're different. When you have a broken-off fragment, there's air above the ink. So when you CT scan it, you have ink next to air. Inside of a wrapped scroll, the ink might be next to papyrus, right? Because it's pushing up against the next layer. And your model may not know what to do with that. So yeah, I think this is one of the challenges: how you take these models that were trained on fragments and translate them to the slightly different environment. But maybe there are parts of the scroll where there is air on the inside, and we know that to be true. You can sort of see that here. And so I think it should at least partly work, and clever people can probably figure out how to make it completely work.

Dwarkesh Patel 

Yeah. So you said the odds are about 50-50? What makes you think that it can be done?

Nat Friedman 

I think it can be done because we've recognized ink from a CT scan on the fragments, and I think everything else is probably geometry and computer vision. The scans are very high resolution; they're eight micrometers. If you stood a scroll on end, the scans are taken in slices through it, so along the Z axis from bottom to top there are these slices. And the way they're represented on disk is that each slice is a TIFF file. And for the full scrolls, each slice is 100-something megabytes. So they're quite high resolution. And then if you stack, for example, 100 of these at eight microns, that's 0.8 millimeters, which is pretty small. We think the resolution is good enough, or at least right on the edge of good enough, that it should be possible. There seem to be six or eight pixels, or voxels I guess, across an entire layer of papyrus. That's probably enough. And we've also seen with the machine learning models that Dr. Seales has got some PhD students who have actually demonstrated this at eight microns. So I think that the ink recognition will work. The data is clearly physically in the scrolls, right? The ink was carbonized, the papyrus was carbonized. But not a lot of data actually physically survived. And then the question is: did the data make it into the scans? And I think that's very likely based on the results that we've seen so far. So I think it's just about a smart person solving this, or a smart group of people, or just a dogged group of people who do a lot of manual work, that could also work, or you may have to be smart and dogged. I think that's where most of my uncertainty is: just whether somebody does it.
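The resolution arithmetic above can be sanity-checked in a few lines. The 8-micron voxel size, the 100-slice example, and the six-to-eight-voxels-per-layer figure come from the conversation; the papyrus layer thickness is an assumed round number for illustration, not a measured value.

```python
# Sanity-checking the scan-resolution arithmetic from the transcript.
VOXEL_SIZE_UM = 8   # scan resolution: 8 microns per voxel
SLICES = 100        # an example stack of 100 TIFF slices

# 100 slices at 8 microns each span 0.8 mm of scroll height.
stack_height_mm = SLICES * VOXEL_SIZE_UM / 1000
print(stack_height_mm)  # 0.8

# Assuming a papyrus layer is roughly 60 microns thick (an assumption;
# real sheets vary), only a handful of voxels cross each layer:
LAYER_THICKNESS_UM = 60
voxels_per_layer = LAYER_THICKNESS_UM / VOXEL_SIZE_UM
print(voxels_per_layer)  # 7.5, consistent with the "six or eight" above
```

So a single papyrus layer is only a few voxels thick in the scan, which is why the ink signal is so subtle and segmentation is so delicate.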

Dwarkesh Patel 

Yeah, I mean, if a quarter of a million dollars doesn’t motivate you.

Nat Friedman 

Yeah, I think money is good. There's a lot of money in machine learning these days.

Dwarkesh Patel 

Do we have enough data in the form of scrolls that have been mapped out to be able to train a model if that's the best way to go? Because one question somebody might have is – Listen, if you already have this ground truth, why hasn't Dr. Seales’ team already been able to just train it?

Nat Friedman 

I think they will. I think if we just let them do it, they'll get it solved. It might take a little bit longer because it's not a huge number of people. There is a big space here. But I mean, yeah, if we didn't launch this contest, I'd still think this would get solved. But it might take several years. And I think this way, it's likely to happen this year.

Dwarkesh Patel 

Let's say the prize is solved. Somebody figures out how to do this and we can read the first scroll. You mentioned that these other layers haven't been excavated. How is the world going to react once we get one of these read?

Nat Friedman 

That's my personal hope for this. I always like to look for these cheap leverage hacks, these moments where you can do a relatively small thing and it creates a... you kick a pebble and you get an avalanche. The theory is, and Brent shares this theory, that if you can read one scroll, the rest follows. We only have two scanned scrolls, there are hundreds of surviving scrolls, and it's relatively expensive to book a particle accelerator. So if you can read one scroll, and you know the technique works and generalizes, and it's going to work on these other scrolls, then the money to scan the remaining scrolls, which is probably in the low millions, maybe only $1 million, will just arrive. It's just too sweet of a prize for that not to happen. And the urgency and the return on excavating the rest of the villa will be incredibly obvious too. Because if there are thousands more papyrus scrolls in there, and we now have the techniques to read them, then there's gold in that mud and it's got to be dug out. It's amazing how little money there is for archaeology. Literally for decades, no one's been digging there. That's my hope: that this is the catalyst that works. Somebody reads it, they get a lot of glory, we all get to feel great. And then the diggers arrive in Herculaneum and they get the rest.

Finding points of leverage

Dwarkesh Patel 

I wonder if the budget for archaeological movies and games like Uncharted or Indiana Jones is bigger than the actual budget to do real world archaeology. But I was talking to some of the people before this interview, and that's one thing they emphasized is your ability to find these leverage points. 

For example, with California YIMBY, I don't know the exact amount you seeded it with. But for that amount of money, and for an institution that new, it has become one of the very few institutions with a significant amount of political influence, if you look at the state of YIMBY in California and nationally today. How do you identify these things?

There's plenty of people who have money who get into history or get into whatever subject, very few do something about it. How do you figure out where?

Nat Friedman 

I'm a little bit mystified by why people don't do more things too. I don't know, maybe you can tell me: why aren't more people doing things? I think most rich people are boring and they should do more cool things. So I'm hoping that they do that now. I think part of it is I just fundamentally don't believe the world is efficient. So if I see an opportunity to do something, I don't have a reflexive reaction that says, oh, that must not be a good idea; if it were a good idea someone would already be doing it. Like someone must be taking care of housing policy in California, right? Or somebody must be taking care of this or that. So first, I don't have that filter that says the world's efficient, don't bother, someone's probably got it covered. And then the second thing is I have learned to trust my enthusiasm. It gets me in trouble too, but if I get really enthusiastic about something and that enthusiasm persists, I just indulge it. And so I just kind of let myself be impulsive. There's this great image that I found and tweeted which said: we do these things not because they are easy, but because we thought they would be easy. That's frequently what happens. The commitment to do it is impulsive, and it's done out of enthusiasm, and then you get into it and you're like, oh my god, this is really much harder than we expected. But then you're committed and you're stuck and you're going to have to get it done.

I thought this project would be relatively straightforward: I'm going to take the data and put it up. But of course 99% of the work had already been done by Dr. Seales and his team at the University of Kentucky. I am a kind of carpetbagger; I've shown up at the end here and tried to do a new piece of it.

Dwarkesh Patel 

The last mile is often the hardest. 

Nat Friedman 

Well it turned out to be fractal anyway. All the little bits that you have to get right to do a thing and have it work and I hope we got all of them. So I think that's part of it – just not believing that the world is efficient and then just allowing your enthusiasm to cause you to commit to something that turns out to be a lot of work and really hard. And then you just are stubborn and don't want to fail so you keep at it. I think that's it. 

Dwarkesh Patel 

The efficiency point, do you think that's particularly true just of things like California YIMBY or this, where there isn't a direct monetary incentive or... 

Nat Friedman 

No. Certain parts of the world are more efficient than others and you can't assume equal levels of inefficiency everywhere. But I'm constantly surprised by how, even in areas you expect to be very efficient, there are things in plain sight that I see and others don't. There's lots of stuff I don't see too. I was talking to some traders at a hedge fund recently. I was trying to understand the role secrets play in the success of a hedge fund. The reason I was interested in that is because I think the AI labs are going to enter a new, similar dynamic where their secrets are very valuable. 

If you have a 50% training efficiency improvement and your training runs cost $100 million, that is a $50 million secret that you want to keep. And hedge funds do that kind of thing routinely. So I asked some traders at a very successful hedge fund, if you had your smartest trader get on Twitch for 10 minutes once a month, and on that Twitch stream describe their 30-day-old trading strategies. Not your current ones, but the ones that are a month old. What would that... How would that affect your business after 12 months of doing that? 

So 12 months, 10 minutes a month, 30-day look back. That’s two hours in a year. And to my shock, they told me about an 80% reduction in their profits. It would have a huge impact. 

And then I asked – So how long would the look back window have to be before it would have a relatively small effect on your business? And they said 10 years. So that I think is quite strong evidence that the world's not perfectly efficient because these folks make billions of dollars using secrets that could be relayed in an hour or something like that. And yet others don't have them or their secrets wouldn't work. So I think there are different levels of efficiency in the world, but on the whole, our default estimate of how efficient the world is is far too charitable. 

Dwarkesh Patel 

On the particular point of AI labs potentially storing secrets, you have this sort of strange norm of different people from different AI labs, not only being friends, but often living together, right? It would be like Oppenheimer living with somebody working on the Russian atomic bomb or something like that. Do you think those norms will persist once the value of the secrets is realized? 

Nat Friedman 

Yeah, I was just wondering about that some more today. It seems to be sort of slowing, they seem to be trying to close the valves. But I think there's a lot of things working against them in this regard. So one is that the secrets are relatively simple. Two is that you're coming off this academic norm of publishing and really the entire culture is based on sort of sharing and publishing. Three is, as you said, they all live in group houses, some are in polycules. There's just a lot of intermixing. And then it's all in California. And California is a non-compete state. We don't have non-competes. And so they'd have to change the culture, get everybody their own house, and move to Connecticut and then maybe it'll work. I think ML engineer salaries and compensation packages will probably be adjusted to try to address this because you don't want your secrets walking out the door. There are engineers, Igor Babushkin for example, who has just joined Twitter. Elon hired him to train models. I think that's public, is that right? I think it is. 

Dwarkesh Patel 

It will be now. 

Nat Friedman 

Igor's a really, really great guy and brilliant but he also happens to have trained state-of-the-art models at DeepMind and OpenAI. I don't know whether that's a consideration or how big of an effect that is, but it's the kind of thing that would make sense to value if you think there are valuable secrets that have not yet proliferated. So I think they're going to try to slow it down, publishing has certainly slowed down dramatically already. But I think there's just a long way to go before you're anywhere in hedge fund or Manhattan Project territory, and probably secrets will still have a relatively short half-life. 

Open Source in AI

Dwarkesh Patel 

As somebody who has been involved in open-source your entire life, are you happy that this is the way that AI has turned out, or do you think that this is less than optimal? 

Nat Friedman 

Well, I don't know. My opinion has been changing. I have increasing worries about safety issues. Not the hijacked version of safety, but some industrial accident type situations or misuse. We're not in that world and I'm not particularly concerned about it in the short term. But in the long term, I do think there are worlds that we should be a little bit concerned about where bad things happen, although I don't know what to do about them. My belief though is that it is probably better on the whole for more people to get to tinker with and use these models, at least in their current state. For example Georgi Gerganov did a four-bit quantization of the LLaMA model this weekend and got it inferencing on an M1 or M2. I was very excited and I got that running and it's fun to play with. Now I've got a model that is very good, it's almost GPT-3 quality, and runs on my laptop. I've grown up in this world of tinkerers and open-source folks and the more access you have, the more things you can try. And so I think I do find myself very attracted to that. 
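The four-bit quantization Nat mentions is what makes laptop inference feasible: storing each weight in 4 bits instead of 16 or 32 shrinks the model roughly four- to eight-fold. A minimal sketch of the core idea in Python (simplified absmax block quantization; the actual llama.cpp formats add bit-packing and per-block refinements, so treat this as illustrative only):

```python
import numpy as np

def quantize_4bit(weights, block_size=32):
    """Quantize float weights to 4-bit integer codes, one scale per block."""
    w = weights.reshape(-1, block_size)
    scale = np.abs(w).max(axis=1, keepdims=True) / 7.0  # int4 signed range: -8..7
    scale[scale == 0] = 1.0                             # avoid divide-by-zero
    q = np.clip(np.round(w / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize_4bit(q, scale, shape):
    """Recover approximate float weights from the codes and per-block scales."""
    return (q.astype(np.float32) * scale).reshape(shape)

# Round-trip a small random weight matrix to see the approximation error.
w = np.random.randn(4, 32).astype(np.float32)
q, s = quantize_4bit(w)
w_hat = dequantize_4bit(q, s, w.shape)
err = np.abs(w - w_hat).max()
```

Each block of 32 weights shares one scale, so the worst-case rounding error per weight is half a quantization step; in practice that costs surprisingly little model quality.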

Dwarkesh Patel 

That covers the scientists and the ideas part of what is being shared, but there's also the actual substance, like the uranium in the atom-bomb analogy. As different sources of data realize how valuable their data is for training newer models, do you think that these things will become harder to scrape? Like Libgen or Archive, are these going to become rate-limited in some way or what are you expecting there? 

Nat Friedman 

First, there's so much data on the internet. The two primitives that you need to build models are – You need lots of data. We have that in the form of the internet, we digitized the whole world into the internet. And then you need these GPUs, which we have because of video games. So you take the internet and video game hardware and smash them together and you get machine learning models, and they're both commodities. I don't think anyone in the open source world is really going to be data-limited for a long time. There's so much that's out there. Probably people who have proprietary data sets that are readily scrapable have been shutting those down, so get your scraping in now if you need to do it. But that's just on the margin. I still think there's quite a lot that's out there to work with. Look, this is the year of proliferation. This is a week of proliferation. We're going to see four or five major AI announcements this week, new models, new APIs, new platforms, new tools from all the different vendors. In a way they're all looking forward. My Herculaneum project is looking backwards. I think it's extremely exciting and cool, but it is sort of a funny contrast. 

Github Acquisition

Dwarkesh Patel 

Before I delve deeper into AI, I do want to talk about GitHub. I think we should start with – You are at Microsoft. And at some point you realize that GitHub is very valuable and worth acquiring. How did you realize that and how did you convince Microsoft to purchase GitHub?

Nat Friedman 

I had started a company called Xamarin together with Miguel de Icaza and Joseph Hill and we built mobile tools and platforms. Microsoft acquired the company in 2016 and I was excited about that. I thought it was great. But to be honest, I didn't actually expect or plan to spend more than a year or so there. But when I got in there, I got exposed to what Satya was doing and just the quality of his leadership team. I was really impressed. And actually, I think I saw him in the first week I was there and he asked me – What do you think we should do at Microsoft? And I said, I think we should buy GitHub. 

Dwarkesh Patel 

When would this have been?

Nat Friedman 

This was like my first week. It was like March or April of 2016. Okay. And then he said – Yeah, it's a good idea. We thought about it. I'm not sure we can get away with it or something like that. And then about a year later, I wrote him an email, just a memo, I sort of said – I think it's time to do this. There was some noise that Google was sniffing around. I think that may have been manufactured by the GitHub team. But it was a good catalyst because it was something I thought made a lot of sense for Microsoft to do anyway. And so I wrote an email to Satya, a little memo saying – Hey, I think we should buy GitHub. Here's why. Here's what we should do with it. The basic argument was developers are making IT purchasing decisions now. It used to be the sort of IT thing and now developers are leading that purchase. And it's this sort of major shift in how software products are acquired. Microsoft really was an IT company. It was not a developer company in the way most of its purchases were made. But it was founded as a developer company, right? And so, you know, Microsoft's first product was a programming language. Yeah, I said – Look, the challenge that we have is there's an entire new generation of developers who have no affinity with Microsoft and the largest collection of them is at GitHub. If we acquire this and we do a merely competent job of running it, we can earn the right to be considered by these developers for all the other products that we do. And to my surprise, Satya replied in like six or seven minutes and said, I think this is very good thinking. Let's meet next week or so and talk about it. I ended up at this conference room with him and Amy Hood and Scott Guthrie and Kevin Scott and several other people. And they said – Okay, tell us what you're thinking. And I kind of said a little 20-minute ramble on it. And Satya said – Yeah, I think we should do it. And why don't we run it independently like LinkedIn. Nat, you'll be the CEO. 
And he said, do you think we can get it for two billion? And I said, we could try. He said Scott will support you on this. Three weeks later, we had a signed term sheet and an announced deal. And then it was an amazing experience for me. I'd been there less than two years. 

Microsoft was made up of and run by a lot of people who've been there for many years. And they trusted me with this really big project. That made me feel really good, to be trusted and empowered. I had grown up in the open source world so for me to get an opportunity to run Github, it's like getting appointed mayor of your hometown or something like that, it felt cool. And I really wanted to do a good job for developers. And so that's how it happened. 

Dwarkesh Patel 

That's actually one of the things I want to ask you about. Often when something succeeds, we kind of think it was inevitable that it would succeed but at the time, I remember that there was a huge amount of skepticism. I would go on Hacker News and the top thing would be the blog posts about how Microsoft's going to mess up GitHub. I guess those concerns have been alleviated throughout the years. But how did you deal with that skepticism and deal with that distrust? 

Nat Friedman 

Well, I was really paranoid about it and I really cared about what developers thought. There's always this question about who are you performing for? Who do you actually really care about? Who's the audience in your head that you're trying to do a good job for or impress or earn the respect of, whatever it is. And though I love Microsoft and care a lot about Satya and everyone there, I really cared about the developers. I'd grown up in this open source world. And so for me to do a bad job with this central institution of open source would have been a devastating feeling for me. It was very important to me not to. So that was the first thing, just that I cared. And the second thing is that the deal leaked. It was going to be announced on a Monday and it leaked on a Friday. Microsoft's buying GitHub. The whole weekend there were terrible posts online. People saying we've got to evacuate GitHub as quickly as possible. And we're like – oh my god, it's terrible. And then Monday, we put the announcement out and we said we're acquiring GitHub. It's going to run as an independent company. And then it said Nat Friedman is going to be CEO. And I had, I don't want to overstate or whatever, but I think a couple people were like – Oh. Nat comes from open source. He spent some time in open source and it's going to be run independently. I don't think they were really that calmed down but at least a few people thought – Oh, maybe I'll give this a few months and just see what happens before I migrate off. And then my first day as CEO after we got the deal closed, at 9 AM the first day, I was in this room and we got on Zoom with all the heads of engineering and product. I think maybe they were expecting some kind of longer-term strategy or something but I came in and I said – GitHub had no official feedback mechanism that was publicly available but there were several GitHub repos that the community members had started. 
Isaac from NPM had started one where he'd just been allowing people to give GitHub feedback. And people had been voting on this stuff for years. And I kind of shared my screen and put that up sorted by votes and said – We're going to pick one thing from this list and fix it by the end of the day and ship that, just one thing. And I think they were like – This is the new CEO strategy? And they were like – I don't know, you need to do database migrations and can't do that in a day. Then someone's like maybe we can do this. We actually have a half implementation of this. And we eventually found something that we could fix by the end of the day. And what I hope I said was – what we need to show the world is that GitHub cares about developers. Not that it cares about Microsoft. Like if the first thing we did after the acquisition was to add Skype integration, developers would have said – Oh, we're not your priority. You have new priorities now. The idea was just to find ways to make it better for the people who use it and have them see that we cared about that immediately. And so I said, we're going to do this today and then we're going to do it every day for the next 100 days. It was cool because I think it created some really good feedback loops, at least for me. One was, you ship things and then people are like – Oh, hey, I've been wanting to see this fixed for years and now it's fixed. It's a relatively simple thing. So you get this sort of nice dopaminergic feedback loop going there. And then people in the team feel the excitement of shipping stuff. I think GitHub was a company that had a little bit of stage fright about shipping previously, and sort of breaking that static friction and shipping a little bit more felt good. And then the other one is just the learning loop. By trying to do lots of small things, I got exposed to things like – Okay, this team is really good. Or this part of the code has a lot of tech debt. 
Or, hey, we shipped that and it was actually kind of bad. How come that design got out? Whereas if the project had been some six-month thing, I'm not sure my learning would have been quite as quick about the company. There's still things I missed and mistakes I made for sure. But that was part of how I think. No one knows kind of factually whether that made a big difference or not, but I do think that earned some trust. 

Dwarkesh Patel 

I mean, most acquisitions don't go well. Not only do they not go as well, but like they don't go well at all, right? As we're seeing in the last few months with a certain one. Why do most acquisitions fail to go well? 

Nat Friedman 

Yeah, it is true. Most acquisitions are destructive of value. What is the value of a company? In an innovative industry, the value of the company boils down to its cultural ability to produce new innovations and there is some sensitive harmonic of cultural elements that sets that up to make that possible. And it's quite fragile. So if you take a culture that has achieved some productive harmonic and you put it inside of another culture that's really different, the mismatch of that can destroy the productivity of the company. Maybe one way to think about it is that companies are a little bit fragile. And so when you acquire them, it's relatively easy to break them. I mean, they're also more durable than people think in many cases too. Another version of it is the people who really care, leave. The people who really care about building great products and serving the customers, maybe they don't want to work for the acquirer and the set of people that are really load bearing around the long-term success is small. When they leave or get disempowered, you get very different behaviors. 

Copilot Origin Story

Dwarkesh Patel 

So I want to go into the story of Copilot because until ChatGPT it was the most widely used application of the modern AI models. What are the parts of the story you're willing to share in public? 

Nat Friedman 

Yeah, I've talked about this a little bit. GPT-3 came out in May of 2020. I saw it and it really blew my mind. I thought it was amazing. I was CEO of GitHub at that time and I thought – I don't know what, but we've got to build some products with this. And Satya had, at Kevin Scott's urging, already invested in OpenAI a year before GPT-3 came out. This is quite amazing. And he invested like a billion dollars. 

Dwarkesh Patel 

By the way, do you know why he knew that OpenAI would be worth investing in at that point?

Nat Friedman 

I don't know. Actually, I've never asked him. That's a good question. I think OpenAI had already had some successes that were noticeable and I think, if you're Satya and you're running this multi-trillion dollar company, you're trying to execute well and serve your customers but you're always looking for the next gigantic wave that is going to upend the technology industry. It's not just about trying to win cloud. It's – Okay, what comes after cloud? So you have to make some big bets and I think he thought AI could be one. And I think Kevin Scott deserves a lot of credit for really advocating for that aggressively. I think Sam Altman did a good job of building that partnership because he knew that he needed access to the resources of a company like Microsoft to build large-scale AI and eventually AGI. So I think it was some combination of those three people kind of coming together to make it happen. But I still think it was a very prescient bet. I've said that to people and they've said – Well, one billion dollars is not a lot for Microsoft. But there were a lot of other companies that could have spent a billion dollars to do that and did not. And so I still think that deserves a lot of credit. Okay, so GPT-3 comes out. I pinged Sam and Greg Brockman at OpenAI and they're like – Yeah, let's. We've already been experimenting with GPT-3 and derivative models in coding contexts. Let's definitely work on something. And to me, at least, and a few other people, it was not incredibly obvious what the product would be. 

Now, I think it's trivially obvious – Auto-complete, my gosh. Isn't that what the models do? But at the time my first thought was that it was probably going to be like a Q&A chatbot, Stack Overflow type of thing. And so that was actually the first thing we prototyped. We grabbed a couple of engineers, SkyUga, who had come in from an acquisition that we'd done, Alex Graveley, and started prototyping. The first prototype was a chatbot. What we discovered was that the demos were fabulous. Every AI product has a fantastic demo. You get this wow moment. It turns out to maybe not be a sufficient condition for a product to be good. At the time the models were just not reliable enough, they were not good enough. If I ask you a question and 25% of the time you give me an incredible answer that I love, but 75% of the time your answers are useless or wrong, it's not a great product experience. 

And so then we started thinking about code synthesis. Our first attempts at this were actually large chunks of code synthesis, like synthesizing whole function bodies. And we built some tools to do that and put them in the editor. And that also was not really that satisfying. And so the next thing that we tried was to just do simple, small-scale auto-complete with the large models and we used the kind of IntelliSense drop-down UI to do that. And that was better, definitely pretty good, but the UI was not quite right. And we lost the ability to do this large scale synthesis. We still had that, but the UI for it wasn't good. To get a function body synthesized you would hit a key. And then I don't know why this was the idea everyone had at the time, but several people had this idea that it should display multiple options for the function body. And the user would read them and pick the right one. And I think the idea was that we would use that human feedback to improve the model. But that turned out to be a bad experience because first you had to hit a key and explicitly request it. Then you had to wait for it. And then you had to read three different versions of a block of code. Reading one version of a block of code takes some cognitive effort. Doing it three times takes more cognitive effort. And then most often the result of that was like – None of them were good or you didn't know which one to pick. That was also a case of putting in a lot of energy and not getting a lot out, sort of frustrating. Once we had that single line completion working, I think Alex had the idea of saying we can use the cursor position in the AST to figure out heuristically whether you're at the beginning of a block in the code or not. And if it's not the beginning of a block, just complete a line. If it's the beginning of a block, show inline a full block completion. The number of tokens you request and when you stop gets altered automatically with no user interaction. 
And then the idea of using this sort of gray text like Gmail had done in the editor. So we got that implemented and it was really only once all those pieces came together, and we started using a model that was small enough to be low latency but big enough to be accurate, that we reached the point where the median new user loved Copilot and wouldn't stop using it. That took four months, five months, of just tinkering and sort of exploring. There were other dead ends that we had along the way. And then it became quite obvious that it was good because we had hundreds of internal users who were GitHub engineers. And I remember the first time I looked at the retention numbers, they were extremely high. It was like 60-plus percent after 30 days from first install. If you installed it, the chance that you were still using it after 30 days is over 60 percent. And it's a very intrusive product. It's sort of always popping UI up, so if you don't like it, you will disable it. Indeed, 40-something percent of people did disable it, but those are very high retention numbers for an alpha first version of a product that you're using all day. Then I was just incredibly excited to launch it. And it's improved dramatically since then. 
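The dispatch Nat describes – inspect where the cursor sits and automatically choose between a one-line completion and a full block synthesis – can be sketched roughly as follows. This is a hypothetical simplification: the function names are invented, and the real implementation consulted the editor's AST rather than raw text as done here:

```python
def completion_mode(source: str, cursor: int) -> str:
    """Decide whether to request a single-line or a block completion.

    Hypothetical sketch: the real heuristic used the cursor position in
    the AST; here we approximate 'start of a new block' by checking
    whether the text before the cursor ends with a block opener.
    """
    before = source[:cursor].rstrip()
    if before.endswith((":", "{")):   # e.g. right after `def f():` or `if (x) {`
        return "block"                # synthesize a whole function/block body
    return "line"                     # otherwise, just finish the current line

def max_tokens_for(mode: str) -> int:
    """The request size is adjusted automatically, with no user interaction."""
    return 256 if mode == "block" else 32

# Cursor at the end of a function signature -> ask for a full body,
# rendered inline as gray text the user can accept or ignore.
src = "def add(a, b):"
mode = completion_mode(src, len(src))
```

The point of the heuristic is that the user never picks a mode: typing mid-line yields short, fast completions, while pausing at the top of an empty block yields a longer synthesis.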

Dwarkesh Patel 

Okay. Sounds very similar to the Gmail story, right? It was incredibly valuable inside the company, and then maybe it was obvious that it needed to go outside. We'll go back to the AI stuff in a second. But some more GitHub questions. By what point, if ever, will GitHub profiles replace resumes for programmers? 

Nat Friedman 

That's a good question. I think they're a contributing element to how people try to understand a person now. But I don't think they're a definitive resume. We introduced READMEs on profiles when I was there and I was excited about that because I thought it gave people some degree of personalization. Many thousands of people have done that. Yeah, I don't know. There's forces that push in the other direction too on that one, where people don't want their activity and skills to be as legible. And there may be some adverse selection as well where, for the people with the most elite skills, it's rather gauche for them to signal their competence on their profile. There's some weird social dynamics that feed into it too. But I will say I think it effectively has this role for people who are breaking through today. One of the best ways to break through. I know many people who are in this situation. Say you were born in Argentina. You're a very sharp person but you didn't grow up in a highly connected or prosperous network, family, et cetera. And yet you know you're really capable and you just want to get connected to the most elite communities in the world. If you're good at programming, you can join open source communities and contribute to them. And you can very quickly accrete a global reputation for your talent, which is legible to many companies and individuals around the world. And suddenly you find yourself getting a job and moving maybe to the US or maybe not moving. You end up at a great startup. I mean, I know a lot of people who deliberately pursued the strategy of building reputation in open source, and then you get the sail up and the wind catches you and you've got a career. I think it plays that role in that sense. But in other communities like machine learning research, this is not how it works. There's a thousand people, the reputation is more on arXiv than it is on GitHub. I don't know if it'll ever be comprehensive.

Dwarkesh Patel 

Are there any other industries for which proof of work of this kind will eat more into the way in which people are hired? 

Nat Friedman 

I think there's a labor market dynamic in software where the really high quality talent is so in demand and the supply is so much less than the demand that it shifts power onto the developers such that they can require of their employers that they be allowed to work in public. And then when they do that, they develop an external reputation which is this asset they can port between companies. If the labor market dynamics weren't like that, if programming well were less economically valuable, companies wouldn't let them do that. They wouldn't let them publish a bunch of stuff publicly and they'd say that's a rule. And that used to be the case, in fact. As software has become more valuable, the leverage of a single super talented developer has gone up and they've been able to demand over the last several decades the ability to work in public. And I think that's not going away. 

Dwarkesh Patel 

Other than that, I mean, we talked about this a little bit, but what has been the impact of developers being more empowered in organizations, even ones that are not traditionally IT organizations? 

Nat Friedman 

Yeah. I mean, software is kind of magic, right? You can write a for loop and do something a lot of times. And when you build large organizations at scale, one of the things that does surprise you is the degree to which you need to systematize the behavior of the people who are working. When I was first starting companies and building sales teams, I had this wrong idea, coming from the world of a programmer, that salespeople were hyper aggressive, hyper entrepreneurial, making promises to the customer that the product wouldn't do, and that the main challenge you had with salespeople was restraining them from going out and aggressively cutting deals that shouldn't be cut. What I discovered is that while that does exist sometimes, the much more common case is that you need to build a systematic sales playbook, which is almost a script that you run on your sales team, where your sales reps know the process to follow to execute this repeatable sales motion and get a deal closed. I just had bad ideas there. I didn't know that that was how the world worked, but software is a way to systematize and scale out a valuable process extremely efficiently. I think the more digitized the world has become, the more valuable software becomes, and the more valuable the developers who can create it become. 

Dwarkesh Patel 

Would 25-year-old Nat be surprised by how well open source worked and how pervasive it is? 

Nat Friedman 

Yeah, I think that's true. I think we all have this image when we're young that these institutions are these implacable edifices that are evil and all powerful and are able to substantially orchestrate the world with master plans. Sometimes that is a little bit true, but they're very vulnerable to these new ideas and new forces and new communications media and stuff like that. Right now I think our institutions overall look relatively weak. And certainly they're weaker than I thought they were back then. Honestly, I thought Microsoft could stop open source. I thought that was a possibility – that they could do some patent move, that there was a master plan to ring-fence open source in. And, you know, that didn't end up being the case. 

In fact when Microsoft bought GitHub, we pledged all of our patent portfolio to open source. That was one of the things that we did as part of it. That was a poetic moment for me, having been on the other side of patent discussions in the past, to be a part of it and be instrumental in Microsoft making that pledge. That was quite crazy. 

Dwarkesh Patel 

Oh, that's really interesting. It wasn't that there was some business or strategic reason. More so it was just like an idea whose time had come. 

Nat Friedman 

Well, GitHub had made such a pledge. And so I think in part of acquiring GitHub, we had to either try to annul that pledge or sign up to it ourselves. And so there was sort of a moment of a forced choice. But everyone at Microsoft thought it was a good idea too. So in many senses it was a moment whose time had come and the GitHub acquisition was a forcing function.

Dwarkesh Patel 

What do you make of critics of modern open source like Richard Stallman, or people who advocate for free software, saying that – Well, corporations might advocate for open source for practical reasons, for getting good code. But the real way software should be made is free: you can replicate it, you can change it, you can modify it, and you can completely view it. And the ethical values around that should be more important than the practical values. What do you make of that critique? 

Nat Friedman 

I think those are the things that he wants and the thing that maybe he hasn't updated is that maybe not everyone else wants that. He has this idea that people want freedom from the tyranny of a proprietary intellectual property license. But what people really want is freedom from having to configure their graphics card or sound driver or something like that. They want their computer to work. There are places where freedom is really valuable. But there's always this thing of – I have a prescriptive ideology that I'd like to impose on the world versus this thing of – I will try to develop the best observational model for what people actually want whether I want them to want it or not. And I think Richard is strongly in the former camp.

Dwarkesh Patel 

What is the most underrated license by the way? 

Nat Friedman 

I don't know. Maybe the MIT license is still underrated because it's just so simple and bare. 

Dwarkesh Patel 

Nadia Eghbal had a book recently where she argued that the key constraint on open source software and on the time of the people who maintain it is the community aspect of software. They have to deal with feature requests and discussions and maintaining for different platforms and things like that. And it wasn't the actual code itself, but rather this sort of extracurricular aspect that was the main constraint. Do you think that is the constraint for open source software? How do you see what is holding back more open source software?

Nat Friedman 

By and large I would say that there is not a problem. Meaning open source software continues to be developed, continues to be broadly used. And there's areas where it works better and areas where it works less well, but it's sort of winning in all the areas where large-scale coordination and editorial control are not necessary. It tends to be great at infrastructure, stand-alone components and very, very horizontal things like operating systems. And it tends to be worse at user experiences and things where you need a sort of dictatorial aesthetic or an editorial control. I've had debates with Dylan Field of Figma, as to why it is that we don't have lots of good open source applications. And I've always thought it had something to do with this governance dynamic of – Gosh, it's such a pain to coordinate with tons of people who all sort of feel like they have a right to try to push the project in one way or another. Whereas in a hierarchical corporation there can be a head of this product or CEO or founder or designer who just says, we're doing it this way. And you can really align things in one direction very, very easily. Dylan has argued to me that it might be because there's just fewer designers, people with good design sense, in open source. I think that might be a contributing factor too, but I think it's still mostly the governance thing. And I think that's what Nadia's pointing at also. You're running a project and you gave it to people for free. For some reason, giving people something for free creates a sense of entitlement. And then they feel like they have the right to demand your time and push things around and give you input and you want to be polite and it's very draining. So I think that where that coordination burden is lower is where open source tends to succeed more. And probably software and other new forms of governance can improve that and expand the territory that open source can succeed in. 

Dwarkesh Patel 

Yeah. Theoretically those two things are consistent, right? You could have very tight control over governance while the code itself is open source. 

Nat Friedman 

And this happens in programming languages. Languages are eventually set in stone and then advanced by committee. But yeah, certainly you have these benign dictators of languages who enforce the strong set of ideas they have, a vision, a master plan. That would be the argument most on Dylan's side: hey, it works for languages, why can't it work for end-user applications? I think the thing you need to do, though, to build a good end-user application is not only have a good aesthetic and idea, but somehow establish a tight feedback loop with a set of users. Where you can say – Dwarkesh, try this. Oh my gosh. Okay, that's not what you need. Doing that is so hard, even in a company where you have total hierarchical control of the team in theory, and everyone really wants the same thing, and everyone's salary and stock options depend on the product being accepted by these users. It still fails many times in that scenario. Doing that additionally in the context of open source is just slightly too hard. 

Dwarkesh Patel 

The reason you acquired GitHub, as you said, is that there seems to be complementarity between Microsoft’s and GitHub's missions. And I guess that's been proven out over the last few years. Should there be more of these collaborations and acquisitions? Should there be more tech conglomerates? Would that be good for the system? 

Nat Friedman 

I don't know if it's good but yes, it is certainly efficient in many ways. I think we are seeing consolidation occur because the math is pretty simple. If you are a large company and you have a lot of customers, then the thing that you've achieved is this very expensive and difficult thing of building distribution and relationships with lots of customers. And that is as hard or harder, and takes longer and more money, than just inventing the product in the first place. So if you can then go and just buy the product for a small amount of money and make it available to all of your customers, there's often an immediate, really obvious gain from doing that. And so in that sense, acquisitions make a ton of sense. And I was surprised that the large companies hadn't done many more acquisitions in the past, until I got into a big company and started trying to do acquisitions myself. I saw that there are strong elements of the internal dynamics that make it hard. It's easier to spend $100 million on employees internally to do a project than to spend $100 million to buy a company. The dollars are treated differently. The approval processes are different. The cultural buy-in processes are different. And then, to the point of the discussion we had earlier, many acquisitions do fail. And when an acquisition fails, it's somehow louder and more embarrassing than when some new product effort you've spun up doesn't quite work out. I think there are lots of internal reasons, some justified and some less so, that they haven't been doing it. But just from an economic point of view, it seems like it makes sense to see more acquisitions than we've seen. 

Dwarkesh Patel 

Well, why did you leave? 

Nat Friedman 

As much as I loved Microsoft, and certainly as much as I loved GitHub – I still feel tremendous love for GitHub and everything that it means to the people who use it – I didn't really want to be a part of a giant company anymore. Building Copilot was an example of this. It wouldn't have been possible without OpenAI and Microsoft and GitHub, but building it also required navigating this really large group of people between Microsoft and OpenAI and GitHub. And you reach a point where you're spending a ton of time just navigating and coordinating lots of people. I just find that less energizing. My enthusiasm for that was not as high. I was torn about it because I truly love GitHub, the product, and there was so much more I still knew we could do, but I was proud of what we'd done. I miss the team and I miss working on GitHub. It was really an honor for me, but it was time for me to go do something new. I was always a startup guy. I always liked small teams, and I wanted to go back to a smaller, more nimble environment. 

Nat.org 

Dwarkesh Patel 

Okay, so we'll get to it in a second. But first, I want to ask about nat.org and the list of 300 words there. Which I think is one of the most interesting and most Straussian lists of 300 words I've seen anywhere. I'm just going to mention some of these and get some of your commentary. You should probably work on raising the ceiling, not the floor. Why? 

Nat Friedman 

First, I say probably. But what does it mean to raise the ceiling or the floor? I've just observed a lot of projects that set out to raise the floor. Meaning – Gosh, we are fine, but they are not, and we need to go help them with our superior prosperity and understanding of their situation. Many of those projects fail. For example, there were a lot of attempts to bring the internet to Africa by large and wealthy tech companies and American universities. I won't say they all had no effect, that's not true, but many of them fell far short of successful. There were satellites, there were balloons, there were high-altitude drones, there were mesh networks, there were laptops, pursued by all these companies. And by the way, by perfectly well-meaning, incredibly talented people who in some cases did see some success, but overall probably much less than they ever hoped. But if you go to Africa, there is internet now. And the way the internet got there is through the technologies that we developed to raise the ceiling in the richest parts of the world, which were cell phones and cell towers. In the movie Wall Street from the 80s, Gordon Gekko has that gigantic brick cell phone. That thing cost like 10 grand at the time. That was a ceiling-raising technology. It eventually went down the learning curve and became cheap. And the cell towers and cell phones – eventually we've got hundreds of millions or billions of them in Africa. It was that initially ceiling-raising technology, and then the force of capitalism, that made it work in the end. It was not any deus ex machina technology solution that was intended to raise the floor. There's something about that that's not just an incidental example. But on my website, I say probably. Because there are some examples where people set out to raise the floor and say – No one should ever die of smallpox again. No one should ever die of guinea worm again. And they succeed. 
I wouldn't want to discourage that from happening but on balance, we have too many attempts to do that. They look good, feel good, sound good, and don't matter. And in some cases, have the opposite of the effect they intend to. 

Dwarkesh Patel 

Here's another one and this is under the EMH section. In many cases, it's more accurate to model the world as 500 people than 8 billion. Now here's my question, what are the 8 billion minus 500 people doing? Why are there only 500 people? 

Nat Friedman 

I don't know exactly. It's a good question. I ask people that a lot. The more I've done in life, the more I've been mystified by this – Oh, somebody must be doing X. And then you hear there's a few people doing X, then you look into it, and they're not actually doing X. They're doing some version of it that's not that. All the best moments in life occur when you find something that to you is totally obvious, that clearly somebody must be doing, but no one is doing. Mark Zuckerberg says this about founding Facebook. Surely the big companies will eventually do this and create this social and identity layer on the internet. Microsoft will do this. But no, none of them were. And he did it. So what are they doing? I think the first thing is that many people throughout the world are optimizing local conditions. They're working in their town, their community, they're doing something there, so the set of people who are thinking about global conditions is just naturally narrowed by the structure of the economy. That's number one. I think number two is, most people really are quite mimetic. We all are, including me. We get a lot of ideas from other people. Our ideas are not our own. We got them from somebody else. It's copy-paste. You have to work really hard not to do that and to be decorrelated. And I think this is even more true today because of the internet. If Albert Einstein were a patent clerk today, wouldn't he have just been on Twitter, getting the same ideas as everybody else? Where do you get decorrelated ideas? I think the internet has correlated us more. The exception would be really disagreeable people who are just naturally disagreeable. So I think the future belongs to the autists in some sense, because they don't care what other people think as much. Those of us on the spectrum in any sense are in that category. And then we have this belief that the world's efficient, and it isn't, and that's part of it. 
The other thing is that the world is so fractal and so interesting. Herculaneum papyri, right? It is this corner of the world that I find totally fascinating, but I don't have any expectation that eight billion people should be thinking about it, or that it should be a priority for everyone. 

Dwarkesh Patel 

Okay, here's another one. Large scale engineering projects are more soluble in IQ than they appear. And here's my question: does that make you think the impact of AI tools like Copilot will be bigger or smaller? Because one way to look at Copilot is that its IQ is probably less than the average engineer's, so maybe it'll have less impact. 

Nat Friedman 

Yeah, but it definitely increases the productivity of the average engineer and brings them higher up. And I think it increases the productivity of the best engineers as well. Certainly a lot of people I consider to be the best engineers tell me that they find it increases their productivity a lot. It's really interesting how so much of what's happened in AI has been soft, fictional work. You have Midjourney, you have copywriting, you have Claude from Anthropic, which is so literary, it writes poetry so well. Except for Copilot, which is this real hard area where the code has to compile, has to be syntactically correct, has to work and pass the tests. We see the steady improvement curve where now, already, on average, more than half of the code is written by Copilot. I think when it shipped, it was in the low 20s percent. And so it's really improved a lot as the models have gotten better and the prompting has gotten better. But I don't see any reason why that won't be 95%. It seems very likely to me. I don't know what that world looks like. It seems like we might have more special-purpose and less general-purpose software. Right now we use general-purpose tools like spreadsheets a lot, but part of that has to do with the cost of creating software. And so once you have much cheaper software, do you create more special-purpose software? That's a possibility. So every company gets just a custom piece of code. Maybe that's the kind of future we're headed towards. So yeah, I think we're going to see enormous amounts of change in software development. 

Dwarkesh Patel 

Another one – The cultural prohibition on micromanagement is harmful; great individuals should be fully empowered to exercise their judgment. And the rebuttal to this is that if you micromanage, you prevent people from learning and developing their own judgment. 

Nat Friedman 

So imagine you go into some company, they hired Dwarkesh, and you do a great job with the first project that they give you. Everyone's really impressed. Man, Dwarkesh, he made the right decisions, he worked really hard, he figured out exactly what needed to be done and he did it extremely well. Over time you get promoted into positions of greater authority, and the reason the company's doing this is they want you to do that again, but at bigger scale, right? Do it again, but 10 times bigger. The whole product instead of part of the product, or 10 products instead of one. The company is telling you, you have great judgment and we want you to exercise that at a greater scale. Meanwhile, the culture is telling you, as you get promoted, you should suspend your judgment more and more and defer to your team. And so there's some equilibrium there, and I think we're just out of equilibrium right now, where the cultural prohibition is too strong. I don't know if this is true or not, but maybe in the 80s I would have felt the other side of this, that we had too much micromanagement. I think the other problem people have is that they don't like micromanagement because they don't want bad managers to micromanage, right? So you have some bad managers, they have no expertise in the area, they're just people managers, and they start to micromanage something they don't understand, where their judgment is bad. And my answer to that is: stop empowering bad managers. Don't have them. Promote and empower people who have great judgment and do understand the subject matter they're working on. If I work for you and I just know you have better judgment, and you come in and you say – now you're launching the scroll thing and you've got the format wrong, here's how you should do it – I would welcome that, even though it's micromanagement, because it's going to make us more successful and I'm going to learn something from that.
I know your judgment is better than mine in this case or at least we're going to have a conversation about it, we're both going to get smarter. So I think on balance, yeah, there are cases where people have excellent judgment and we should encourage them to exercise it and sometimes, things will go wrong when you do that, but on balance you will get far more excellence out of it and we should empower individuals who have great judgment. 

Dwarkesh Patel 

Yeah. There's a quote about Napoleon that if he could have been in every single theater of every single battle he was part of, he would have never lost a battle. I was talking to somebody who worked with you at GitHub and she emphasized to me that, even in applications that were already being shipped out to engineers, so much of the actual suggestions and the actual design came from you directly. It's kind of remarkable to me that as CEO you would be that hands-on. 

Nat Friedman 

Yeah, you can probably also find people you can talk to who think that was terrible. But the question is always: does that scale? And the answer is it does not scale. The experience that I had as CEO was I was terrified all the time that there was someone in the company who really knew exactly what to do and had excellent judgment, but because of cultural forces that person wasn't empowered. That person was not allowed to exercise their judgment and make decisions. And so when I would think and talk about this, that was the fear that it was coming from. They were in some consensus environment where their good ideas were getting whittled down by lots of conversations with other people and a politeness and a desire not to micromanage. So we were ending up with some kind of average thing. And I would rather have more high variance outcomes where you either get something that's excellent because it is the expressed vision of a really good auteur or you get a disaster and it didn't work and now you know it didn't work and you can start over. I would rather have those more high variance outcomes and I think it's a worthy trade.

Dwarkesh Patel 

Okay, let's talk about AI. What percentage of the economy is basically text to text? 

Nat Friedman 

Yeah, it's a good question. We've done the sort of Bureau of Labor Statistics analysis of this. It's not the majority of the economy or anything like that. We're in the low double-digit percentages. The thing that I think is hard to predict is what happens over time as the cost of text to text goes down. I don't know what that's going to do. But yeah, there's plenty of revenue to be had now. One way you can think about it is – Okay, we have all these benchmarks for machine learning models. There's LAMBADA and there's this and there's that. Those are really only useful, and only exist, because we haven't deployed the models at scale. So we don't have a sense of what they're actually good at. The best metric would probably be something like – What percentage of economic tasks can they do? Or on a gig marketplace like Upwork, for example, what fraction of Upwork jobs can GPT-4 do? That, I think, is an interesting question. My guess is extremely low right now, autonomously. But over time, it will grow. And then the question is, what does that do to Upwork? I'm guessing it's a five billion dollar GMV marketplace, something like that. Does it grow? Does it become 15 billion or 50 billion? Does it shrink because the cost of text to text tasks goes down? I don't know. My bet would be that we find more and more ways to use text to text to advance progress. So overall, there's a lot more demand for it. I guess we'll see. 

Dwarkesh Patel 

At what point does that happen? GPT-3 has been a sort of rounding error in terms of overall economic impact. Does that happen with GPT-4, GPT-5, where we see billions of dollars of usage? 

Nat Friedman 

Yeah, I've got early access to GPT-4 and I've gotten to use it a lot. And I honestly can't tell you the answer to that, because it's so hard to discover what these things can do that the prior ones couldn't. I was just talking to someone last night who told me – Oh, GPT-4 is actually really good at Korean and Japanese, and GPT-3 is much worse at those. So it's actually a real step change for those languages. And people didn't know how good GPT-3 was until it got instruction-tuned for ChatGPT and was put out in that format. You can imagine the pre-trained models as a kind of unrefined crude oil, and then once they've been RLHF'd and instruction-tuned and put out into the world, people can find the value. 

Dwarkesh Patel 

What part of the AI narrative is wrong in the over-optimistic direction? 

Nat Friedman 

Probably the over-optimistic case, from both the people who are fearful of what will happen and the people who are expecting great economic benefits, assumes that we're definitely not in a realm of diminishing returns from scale. For example, GPT-4 is, my guess is, two orders of magnitude more expensive to train than GPT-3, but clearly not two orders of magnitude more capable. Now, is it two orders of magnitude more economically valuable? That would also surprise me. When you're on these sigmoids, where you are going up this exponential and then you start to asymptote, it can be difficult to tell when that's going to happen. The possibility that we run into hard problems, or that scaling stops being worth it on a dollar basis, is a reason to be a little bit more pessimistic than the people who have high certainty of GDP increasing by 50% per month, which I think some people are predicting. But on the whole, I'm very optimistic. You're asking me to make the bear case for something I'm very bullish about. 

Dwarkesh Patel 

No, that's why I asked you to make the bear case because I know about you. I want to ask you about these foundation models. What is the stable equilibrium you think of how many of them will there be? Will it be an oligopoly like Uber and Lyft where…? 

Nat Friedman 

I think there will probably be wide-scale proliferation. If you asked me, what are the structural forces that favor proliferation and the structural forces that favor concentration, I think the proliferation case is a bit stronger. The proliferation case is: they're actually not that hard to train. The best practices will spread. You can write them down on a couple sheets of paper. And to the extent that secrets are developed that improve training, those are relatively simple and they get copied around easily. That's number one. Number two, the data is mostly public, it's mostly data from the internet. Number three, the hardware is mostly commodity, and the hardware is improving quickly and getting much more efficient. I think some of these labs potentially have 50, 100, 200 percent training efficiency improvement techniques, so there's just a lot of low-hanging fruit on the technique side of things. And we're seeing it happen. It's happening this weekend, it's happening this year. We're getting a lot of proliferation. The only case against proliferation is that you'll get concentration because of training costs. And I don't know if that's true. 

I don't have confidence that the trillion dollar model will be much more valuable than the 100 billion dollar model, or even that it will be necessary to spend a trillion dollars training it. Maybe there will be so many techniques available for improving efficiency. How much are you willing to spend on researchers to find techniques if you're willing to spend a trillion on training? That's a lot of bounties for new techniques, and some smart people are going to take those bounties.

Dwarkesh Patel 

How different will these models be? Will it just be sort of everybody chasing the same exact marginal improvement, leading to the same marginal capabilities, or will they have entirely different repertoires of skills and abilities? 

Nat Friedman 

Right now, back to the mimetic point, they're all pretty similar. Basically the same rough techniques. What's happened is an alien substance has landed on Earth and we are trying to figure out what we can build with it, and we're in this period of multiple overhangs. We have a compute overhang, where there's much more compute in the world than is currently being used to train models – much, much more. I think the biggest models are trained on maybe 10,000 GPUs, but there are millions of GPUs. And then we have a capability and technique overhang, where there are lots of good ideas coming out and we haven't figured out how best to assemble them all together, but that's just a matter of time until people do that. And because many of those capabilities are in the hands of the labs, they haven't reached the tinkerers of the world. And that's where you find out – what can this thing actually do? Until you get your hands on it, you don't really know. I think OpenAI themselves were surprised by how explosively ChatGPT has grown. I don't think they put ChatGPT out expecting that to be the big announcement. I think they thought GPT-4 was going to be their big announcement. It still probably is and will be big, but ChatGPT really surprised them. It's hard to predict what people will do with it and what they'll find valuable and what works. So you need tinkerers. So it goes from hardware to researchers to tinkerers to products. That's the pipe, that's the cascade. 

Dwarkesh Patel 

When I was scheduling my interview with Ilya, it was originally supposed to be around the time that ChatGPT came out, and their comms person tells me – Listen, just so you know, this interview would be scheduled around the time we're going to make a minor announcement. It's not the thing you're thinking, it's not GPT-4, it's just a minor thing. They didn't expect what it ended up being. 

Have incumbents gotten smarter than before? It seems like Microsoft was able to integrate this new technology well. 

Nat Friedman 

There have been two really big shifts in the way incumbents behave in the last 20 years that I've seen. The first is, it used to be that incumbents got disrupted by startups all the time. You have example after example of this in the mini-computer and micro-computer eras, et cetera. And then Clay Christensen wrote The Innovator's Dilemma. And I think what happened was that everyone read it and said – Oh, disruption is this thing that occurs, and we have this innovator's dilemma where we get disrupted because the new thing is cheaper, and we can't let that happen. And they became determined not to let that happen, and they mostly learned how to avoid it. They learned that you have to be willing to do some cannibalization, and you have to be willing to set up separate sales channels for the new thing, and so forth. We've had a lot of stability in incumbents for the last 15 years or so. I think that's maybe why. That's my theory. So that's the first major step change. And then the second one is: man, they are paying a ton of attention to AI. If you look at the prior platform revolutions, like cloud, mobile, internet, web, PC, all the incumbents derided the new platform and said – Gosh, no one's going to use web apps. Everyone will use full desktop apps, rich applications. And so there was always this laughing at the new thing. The iPhone was laughed at by incumbents, and that is not happening at all with AI. We may be at peak hype cycle and we're going to enter the trough of despair. I don't think so, though. I think people are taking it seriously, and every live player CEO is adopting it aggressively in their company. So yeah, I think incumbents have gotten smarter. 

Questions from Twitter

Dwarkesh Patel 

All right. So let me ask you some questions that we got from Twitter. This is from former guest and, I guess, mutual friend Austin Vernon: "Nat is one of those people that seems unreasonably effective. What parts of that are innate and what did he have to learn?" 

Nat Friedman 

It's very nice of Austin to say. I don't know. We talked a little bit about this before, but I think I just have a high willingness to try things and get caught up in new projects and then I don't want to stop doing it. I think I just have a relatively low activation energy to try something and am willing to sort of impulsively jump into stuff and many of those things don't work, but enough of them do that I've been able to accomplish a few things. The other thing I would say, to be honest with you, is that I do not consider myself accomplished or successful. My self-image is that I haven't really done anything of tremendous consequence and I don't feel like I have this giant bed of achievements that I can go to sleep on every night. I think that's truly how I feel. I'm an insecure overachiever, I don't really feel good about myself unless I'm doing good work, but I also have tried to cultivate a forward-looking view where I try not to be incredibly nostalgic about the past. 

I don't keep lots of trophies or anything like that. Go into some people's offices and there are things on the wall and trophies of all the things they've accomplished, and it always seemed really icky to me. I just had a sort of revulsion to that. 

Dwarkesh Patel 

Is that why you took down your blog? 

Nat Friedman 

Yeah. I just wanted to move forward. 

Dwarkesh Patel 

Simian asks for your takes on alignment. “He seems to invest both in capabilities and alignment which is the best move under a very small set of beliefs.” So he's curious to hear the reasoning there. 

Nat Friedman 

I guess we'll see, but I'm not sure capabilities and alignment end up being these opposing forces. It may be that capabilities are very important for alignment. Maybe alignment is very important for capabilities. I think a lot of people believe, and I think I'm included in this, that AI can have tremendous benefits, but that there's a small chance of really bad outcomes. Maybe some people think it's a large chance. The solutions, if they exist, are likely to be technical. There's probably some combination of technical and prescriptive. It's probably a piece of code and a readme file. It says – if you want to build aligned AIs, use this code and don't do this, or something like that. I think that's really important, and more people should try to actually build technical solutions. One of the big things that's missing, that perplexes me, is that there's no open source technical alignment community. There's no one actually implementing the best alignment tools in open source. There's a lot of philosophizing and talking, and then there's a lot of behind-closed-doors interpretability and alignment work. Because the alignment people have this belief that they shouldn't release their work, I think we're going to end up in a world where there's a lot of open source, pure capabilities work, and no open source alignment work, for a little while. Hopefully that'll change. So yeah, I wanted to, on the margin, invest in people doing alignment. It seems like that's important. I thought Sydney was kind of an example of this. You had Microsoft essentially release an unaligned AI, and I think the world sort of said – Hmm, it's threatening its users, that seems a little bit strange. If Microsoft can't put a leash on this thing, who can? I think there'll be more interest in it, and I hope there are open communities. 

Dwarkesh Patel 

That was so endearing for some reason. Threatening you just made it so much more lovable. 

Nat Friedman 

Yeah, I think the only reason it wasn't scary is because it wasn't hooked up to anything. If it was hooked up to HR systems, or if it could post jobs or get on a gig worker site or something like that, I think it could have been scary. 

Dwarkesh Patel 

Yep. Final question from Twitter. Will asks, "What historical personality seems like the most kindred spirit to you?" Bookshelves are all around us in this room, and some of the books are biographies. Is there one that sticks out to you?

Nat Friedman 

Gosh, good question. I think I'd say it's changed over time. I've been reading Philodemus's work recently. When I grew up, Richard Feynman was the character who was curious and plain-spoken. 

Dwarkesh Patel 

What's next? You said that, in your own perception, you still have more accomplishments ahead of you. What does that look like, concretely? Do you know yet? 

Nat Friedman 

I don't know. It's a good question. The area I'm paying most attention to is AI. I think we finally have people building the products and that's going to just accelerate. I'm going to pay attention to AI and look for areas where I can contribute. 

Dwarkesh Patel 

Awesome. Okay. Nat this was a true pleasure. Thanks for coming on the podcast. 

Nat Friedman 

Thanks for having me. 
