Dwarkesh Podcast

Sam Bankman-Fried - Crypto, Altruism, and Leadership

Infiltrating tradfi, giving away $100m this year, hiring A-players

I flew to the Bahamas to interview Sam Bankman-Fried, the CEO of FTX! He talks about FTX’s plan to infiltrate traditional finance, giving $100m this year to AI + pandemic risk, scaling slowly + hiring A-players, and much more.

Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform.

Episode website + Transcript here.

Follow me on Twitter for updates on future episodes

Subscribe to find out about future episodes!

Timestamps

(00:18) - How inefficient is the world?

(01:11) - Choosing a career

(04:15) - The difficulty of being a founder

(06:21) - Is effective altruism too narrow-minded?

(09:57) - Political giving

(12:55) - FTX Future Fund

(16:41) - Adverse selection in philanthropy

(18:06) - Correlation between different causes

(22:15) - Great founders do difficult things

(25:51) - Pitcher fatigue and the importance of focus

(28:30) - How SBF identifies talent

(31:09) - Why scaling too fast kills companies

(33:51) - The future of crypto

(35:46) - Risk, efficiency, and human discretion in derivatives

(41:00) - Jane Street vs FTX

(41:56) - Conflict of interest between broker and exchange

(42:59) - Bahamas and Charter Cities

(43:47) - SBF’s RAM-skewed mind

Unfortunately, audio quality abruptly drops from 17:50-19:15

Transcript

Dwarkesh Patel 0:09

Today on The Lunar Science Society Podcast, I have the pleasure of interviewing Sam Bankman-Fried, CEO of FTX. Thanks for coming on The Lunar Society.

Sam Bankman-Fried 0:17

Thanks for having me.

How inefficient is the world?

Dwarkesh Patel 0:18

Alright, first question. Do the consecutive successes of FTX and Alameda suggest to you that the world has all kinds of low-hanging opportunities? Or was that a property of the inefficiencies of crypto markets at one particular point in history?

Sam Bankman-Fried 0:31

I think it's more of the former, there are just a lot of inefficiencies.

Dwarkesh Patel 0:35

So then another part of the question is: if you had to restart earning to give, but couldn't do it in crypto, what are the odds you become a billionaire again?

Sam Bankman-Fried 0:42

I think they're pretty decent. A lot of it depends on what I ended up choosing and how aggressive I end up deciding to be. There were a lot of safe and secure career paths before me that definitely would not have ended there. But if I dedicated myself to starting up some businesses, there would have been a pretty decent chance of it.

Choosing a career

Dwarkesh Patel 1:11

So that leads to the next question—which is that you've cited Will MacAskill's lunch with you while you were at MIT as being very important in deciding your career. He suggested you earn-to-give by going to a quant firm like Jane Street. In retrospect, given the success you've had as a founder, was that maybe bad advice? And maybe you should’ve been advised to start a startup or nonprofit?

Sam Bankman-Fried 1:31

I don't think it was literally the best possible advice because this was in 2012. Starting a crypto exchange then would have been… I think it was definitely helpful advice. Relative to not having gotten advice at all, it helped quite a bit.

Dwarkesh Patel 1:50

Right. But then there's a broader question: are people like you who could become founders advised to take lower-variance, lower-risk careers that, in expected value, are less valuable?

Sam Bankman-Fried 2:02

Yeah, I think that's probably true. I think people are advised too strongly to go down safe career paths. But I think it's worth noting that there's a big difference between what makes sense altruistically and personally for this. To the extent you're just thinking of personal criteria, that's going to argue heavily in favor of a safer career path because you have much more quickly declining marginal utility of money than the world does. So, this kind of path is specifically for altruistically-minded people.

The other thing is that when you think about advising people, I think people will often try and reference career advice that others got. “What were some of these outward-facing factors of success that you can see?” But often the answer has something to do with them and their family, friends, or something much more personal. When we talk with people about their careers, personal considerations and the advice of people close to them weigh very heavily on the decisions they end up making.

Dwarkesh Patel 3:17

I didn't realize that the personal considerations were as important in your case as the advice you got.

Sam Bankman-Fried 3:24

Oh, I don't think they were in my case. But it is true of many people that I've talked to.

Dwarkesh Patel 3:29

Speaking of declining marginal utility of consumption, I'm wondering if you think the implication is that, over the long term, all the richest people in the world will be utilitarian philanthropists, because they don't have diminishing returns on consumption. They're risk-neutral.

Sam Bankman-Fried 3:40

I wouldn't say all will, but I think there probably is something in that direction. People who are looking at how they can help the world are going to end up being disproportionately represented amongst the most and maybe least successful.

The difficulty of being a founder

Dwarkesh Patel 3:54

Alright, let’s talk about Effective Altruism. So in your interview with Tyler Cowen, you were asked, “What constrains the number of altruistically minded projects?” And you answered, “Probably someone who can start something.”

Now, is this a property of the world in general? Or is this a property of EAs? And if it's about EAs, then is there something about the movement that drives away people who could take leadership roles?

Sam Bankman-Fried 4:15

Oh, I think it's just the world in general. Even if you ignore altruistic projects and just look at profit-minded ones, we have lots of ideas for businesses that we think would probably do well if they were run well, that we'd be excited to fund. And the missing ingredient quite frequently is the right person or team to take the lead. In general, starting something is brutal. It's brutal being a founder, and it requires a somewhat specific but extensive list of skills. Those things end up making that kind of person high in demand.

Dwarkesh Patel 4:56

What would it take to get more of those kinds of people to go into EA?

Sam Bankman-Fried 4:59

Part of it is probably just talking with them about, “Have you thought about what you can do for the world? Have you thought about how you can have an impact on the world? Have you thought about how you can maximize your impact on the world?” Many people would be excited about thinking critically and ambitiously about how they can help the world. So I think honestly, just engagement is one piece of this. And then even within people who are altruistically minded and thinking about what it would take for them to be founders, there are still things that you can do.

Some of this is about empowering people and some of this is about normalizing the fact that when you start something, it might fail—and that's okay. Most startups, and especially very early-stage startups, should not be trying to maximize the chances of having at least a little bit of success. But that means you have to be okay with the personal fallout of failing, and we have to build a community that is okay with that. I don't think we have that right now; I think very few communities do.

Is effective altruism too narrow-minded?

Dwarkesh Patel 6:21

Now, there are many good objections to utilitarianism, as you know. You said yourself that we don't have a good account of infinite ethics—should we attribute substantial weight to the probability that utilitarianism is wrong? And how do you hedge for this moral uncertainty in your giving?

Sam Bankman-Fried 6:35

So I don't think it has a super large impact on my giving. Partially because you'd need a concrete proposal for what else you would do that would be different actions-wise—and I don't know that I've been compelled by many of those. I do think that there are a lot of things we don't understand right now. One thing that you pointed to is infinite ethics. Another thing is that (I'm not sure this is moral uncertainty; it might be physical uncertainty) there are a lot of chains of reasoning people will go down that are somewhat contingent on our current understanding of the universe—which might not be right. And, if you look at expected-value outcomes, those might not be right either.

Say what you will about the size of the universe and what that implies, but some of the same people make arguments based on how big the universe is and also think the simulation hypothesis has decent probability. Very few people chain through, “What would that imply?” I don't think it's clear what any of this implies. If I had to say, “How have these considerations changed my thoughts on what to do?”

The honest answer is that they have changed it a little bit. And the direction that they pointed me in is things with moderately more robust impact. What I mean by that is: one way that you can calculate the expected value of an action is, "Here's what's going to happen. Here are the two outcomes, and here are the probabilities of them." Another thing you can do is say - it's a little bit more hand-wavy - "How much better is this going to make the world? How much does it matter if the world is better in generic, diffuse ways?" Typically, EA has been pretty skeptical of that second line of reasoning—and I think correctly. When you see it deployed, it's nonsense. Usually, when people are pretty hard to nail down on the specific reasoning of why they think that something might be good, it's because they haven't thought that hard about it or don't want to think that hard about it. The much better analyzed and vetted pathways are the ones we should be paying attention to.

That being said, I do think that sometimes EA gets too narrow-minded and specific about plotting out courses of impact. This is one of the reasons why: people end up fixating on one particular understanding of the universe, of ethics, of how things are going to progress. But all of these things have some amount of uncertainty in them. And when you jostle them, some theories of impact behave somewhat robustly and some of them completely fall apart. I've become a bit more sympathetic to the ones that are a little more robust to different assumptions about what the world ends up looking like.

Political giving

Dwarkesh Patel 9:57

In the May 2022 Oregon Congressional Election, you gave 12 million dollars to Carrick Flynn, whose campaign was ultimately unsuccessful. How have you updated your beliefs about the efficacy of political giving in the aftermath?

Sam Bankman-Fried 10:12

It was the first time that I gave on that scale in a race. And I did it because he was, of all the candidates in the cycle, the most outspoken on the need for more pandemic preparedness and prevention. He lost—such is life. In the end, there are some updates on the efficacy of various things. But, I never thought that the odds were extremely high that he was going to win. It was always going to be an uncertain close race. There's a limit to how much you can update from a one-time occurrence. If you thought the odds were 50-50, and it turns out to be close in one direction or another, there's a maximum of a factor-of-two update that you have on that. There were a bunch of sort of micro-updates on specific factors of the race, but on a high level, it didn’t change my perspective on policy that much.
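To make the factor-of-two point concrete, here's a minimal sketch of a two-hypothesis Bayesian update in Python. The hypotheses and probabilities are made up for illustration; they aren't from the episode.

```python
# A minimal two-hypothesis Bayesian update (toy hypotheses and numbers,
# not from the episode).

def posterior_a(prior_a, p_loss_given_a, p_loss_given_b):
    """Posterior probability of hypothesis A after observing one loss."""
    prior_b = 1 - prior_a
    num = prior_a * p_loss_given_a
    return num / (num + prior_b * p_loss_given_b)

# A: "the race was a genuine coin flip"             -> P(loss | A) = 0.5
# B: "the spending did nothing; a loss was certain" -> P(loss | B) = 1.0
print(posterior_a(0.5, 0.5, 1.0))  # 0.333...: prior odds of 1:1 move only to 1:2
```

Even against a hypothesis that predicted the loss with certainty, the odds only move from 1:1 to 1:2, which is the factor-of-two ceiling he describes.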

Dwarkesh Patel 11:23

But does it make you think there are diminishing or possibly negative marginal returns from one donor giving to a candidate? Because of the negative PR?

Sam Bankman-Fried 11:30

At some point, I think that's probably true.

Dwarkesh Patel 11:33

Continuing on the theme of politics, when is it more effective to give the marginal million dollars to a political campaign or institution to make some change at the government level (like putting in early detection)? Or when is it more effective to fund it yourself?

Sam Bankman-Fried 11:47

It's a good question. It's not necessarily mutually exclusive. One thing worth looking at is the scale of the things that need to happen. How much are things like international cooperation important for it? When you look at pandemic prevention, we're talking tens of billions of dollars of scale necessary to start putting this infrastructure in place. So it's a pretty big-scale thing—which is hard to fund to that level individually. It's also something where we're going to need cooperation between different countries on, for example, what their surveillance for new pathogens looks like. And vaccine distribution: if some countries have great distribution of vaccines and others don't, that's not good. It's neither fair nor equitable for the countries that get hit hardest. But also, in a global pandemic, it's going to spread. You need global coverage. That's another reason that government has to be involved, at least to some extent, in the efforts.

FTX Future Fund

Dwarkesh Patel 12:55

Let's talk about Future Fund. As you know, there are already many existing Effective Altruist organizations that do donations. What is the reason you thought there was more value in creating a new one? What's your edge?

Sam Bankman-Fried 13:06

There's value in having multiple organizations. Every organization has its blind spots, and you can help cover those if you have a few. If OpenPhil didn't exist, maybe we would have created an organization that looks more like OpenPhil. They are covering a lot of what we're looking at—we're looking at overlapping, but not identical, things. I think having that diversity can be valuable. But to point to the ways in which we intentionally designed ourselves to be a little bit different from existing donors:

One thing that I've been really happy about is the re-granting program. We have a number of people who are experts in various areas, to whom we've basically given pots that they can re-grant. What are the reasons that we think this is valuable? One thing is giving more stakeholders a chance to voice their opinions, because we can't possibly be listening to everyone in the world directly and integrating all those opinions to come up with a perfect set of answers. Distributing it and letting them act semi-autonomously can help with that. The other thing is that it helps with a large number of smaller grants. Think about an organization giving away $100 million in a year: if we divided that up into $25,000 grants, how many grants would that mean? 4,000 grants to analyze, right? If we want to give real thought to each one of those, we can't do that.

But on the flip side, sometimes the smaller grants are the most impactful per dollar. There are a lot of cases where someone really impressive has an exciting idea for a new foundation or organization that could do a lot of good for the world and needs $25,000 to get started: to rent a small office, to cover salaries for two employees for the first six months. Those are the kinds of cases where a pretty small grant can make a huge change in the development of what might ultimately become a really impactful organization. But they're also the kinds of things that are really hard for our team to evaluate all of, just given the number of them—and the re-grantor program gives us a way to do it. Instead, we have 10, 50, or 100 re-grantors going out and finding a lot of those opportunities close to them; they can identify and direct those grants—and it gives us a much wider reach. It also biases things less towards people who we happen to know, which is good.

We don't want to just overfund everyone we know and underfund everyone we don't. That's one initiative that I've been pretty excited about and that we're going to keep doing. Another thing we've really put a lot of emphasis on is making the application process smooth and clean. There are pros and cons to this, but it drops the activation energy necessary for someone to decide to apply for a grant and fill out all of the forms. We've really tried to bring more people into the fold.

Adverse selection in philanthropy

Dwarkesh Patel 16:41

If you make it easy for people to fill out your application and generally fund things that other organizations wouldn't, how do you deal with the possibility of adverse selection in your philanthropic deal flow?

Sam Bankman-Fried 16:52

It's a really good question. The worry is that Bob down the street might see a great bookcase that he wants and wonder if he can get funding for it, since it's going to house a lot of knowledge. Knowledge is good, right? Obviously, we would detect that pretty quickly. The basic answer is that we still vet all of these; we do have oversight of them. We do a deep dive into all of the large ones, and into randomly sampled subsets of the small ones—which allows us to get a good statistical sense of whether we are facing significant adverse selection. So far, we haven't seen obvious signs of it, but we're going to keep doing these analyses and see if anything worrying comes out of them. That's a way to have more trusted analyses for a more scaled-up number of grants.
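A rough sketch of what a sampling audit like this could look like in Python. The dollar threshold, sample size, and confidence math are illustrative assumptions, not Future Fund's actual process.

```python
import math
import random

def split_grants(grants, large_threshold=250_000):
    """Deep-dive every large grant; the small ones get sampled instead.
    (The threshold is a made-up number for illustration.)"""
    large = [g for g in grants if g["amount"] >= large_threshold]
    small = [g for g in grants if g["amount"] < large_threshold]
    return large, small

def estimate_bad_rate(small_grants, is_bad, sample_size=100):
    """Estimate the fraction of problematic small grants from a random
    sample, with a rough 95% margin of error on the proportion."""
    sample = random.sample(small_grants, min(sample_size, len(small_grants)))
    n = len(sample)
    p = sum(is_bad(g) for g in sample) / n
    margin = 1.96 * math.sqrt(p * (1 - p) / n)
    return p, margin
```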

Correlation between different causes

Dwarkesh Patel 18:06

A long time ago, you wrote a blog post about how EA causes are multiplicative, instead of additive. Do you still find that's the case with most of the causes you care about? Or are there cases where some of the causes you care about are negatively multiplicative? An example might be economic growth and the speed at which AI takes off.

Sam Bankman-Fried 18:24

Yeah, I think it's getting more complicated. Specifically around AI, you have a lot of really complex factors that can point in the same direction or in opposite directions. Especially if what you think matters is something like the relative progress of AI safety research versus AI capabilities research, a lot of things are going to have the same impact on both of those, and thus a confusing impact on safety as a whole.

I do think it's more complicated now. It's not cleanly things just multiplying with each other. There are lots of cases where you see multiplicative behavior, but there are cases where you don't have that. The conclusion of this is: if you have multiplicative cases, you want to be funding each piece of it. But if you don't, then you want to be trying to identify the most impactful pieces and move those along. Our behavior should be different in those two scenarios.
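A toy model of that distinction (the functional forms and numbers below are mine, purely for illustration): when impact is additive you concentrate the budget in the most effective piece, and when it's multiplicative a zero anywhere zeroes everything, so you fund every piece.

```python
# Toy impact functions over a budget split (x, y); illustrative only.

def additive_impact(x, y):
    # Independent causes; cause x is twice as effective per dollar.
    return 2 * x + y

def multiplicative_impact(x, y):
    # Complementary causes; either piece at zero zeroes the whole thing.
    return x * y

# Splitting a budget of 10 as (10, 0) versus (5, 5):
print(additive_impact(10, 0), additive_impact(5, 5))              # 20 15 -> concentrate
print(multiplicative_impact(10, 0), multiplicative_impact(5, 5))  # 0 25 -> fund every piece
```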

Dwarkesh Patel 19:23

If you think of your philanthropy from a portfolio perspective, is correlation good or bad?

Sam Bankman-Fried 19:29

Expected value is expected value, right? Let's pretend that there is one person in Bangladesh and another one in Mexico. We have two interventions, both 50-50 on saving each of their lives. Suppose there’s some new drug that we could release to combat a neglected disease. This question is asking, “are they correlated?” “Are these two drugs correlated in their efficacy?” And my basic argument is, “it doesn't matter, right?” If you think about it from each of their perspectives, the person in Mexico isn't saying, “I only want to be saved in the cases where the person in Bangladesh is or isn't saved.” That’s not relevant. They want to live.

The person in Bangladesh similarly wishes to live. You want to help both of them as much as you can. It's not super relevant whether there’s alignment or anti-alignment between the cases where you get lucky and the ones where you don't.
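The formal version of this argument is linearity of expectation: E[X + Y] = E[X] + E[Y] no matter how X and Y are correlated. A minimal numerical check using the 50-50 numbers from his example:

```python
# Each intervention saves one life with probability 0.5.

# Independent flips: four equally likely outcomes
# (both saved, first only, second only, neither).
ev_independent = 0.25 * 2 + 0.25 * 1 + 0.25 * 1 + 0.25 * 0

# Perfectly correlated flips: both saved or neither saved.
ev_correlated = 0.5 * 2 + 0.5 * 0

print(ev_independent, ev_correlated)  # 1.0 1.0 - the expected value is identical
```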

Dwarkesh Patel 20:46

What’s the most likely reason that Future Fund fails to live up to your expectations?

Sam Bankman-Fried 20:51

We get a little lame. We give to a lot of decent things, but all the cooler or more innovative things that we do don't seem to work very well. We end up giving to the same things that everyone else is giving to. We don't turn out to be effective at starting new things, and we don't turn out to be effective at thinking of new causes or executing on them. Hopefully, we'll avoid that. But it's always a risk.

Dwarkesh Patel 21:21

Should I think of your charitable giving, as a yearly contribution of a billion dollars? Or should I think of it as a $30 billion hedge against the possibility that there's going to be some existential risk that requires a large pool of liquid wealth?

Sam Bankman-Fried 21:36

It's a really good question; I'm not sure. We've given away about $100 million so far this year. We're doing that because we think there are really important things to fund, and to start scaling up those systems: we notice opportunities as they come, and we have systems ready in place to give to them. But it's something we're really actively discussing internally—how concentrated versus diffuse we want that giving to be, and whether to store up for one very large opportunity versus a mixture of many.

Great founders do difficult things

Dwarkesh Patel 22:15

When you look at a proposal and think this project could be promising, but this is not the right person to lead it, what is the trait that's most often missing?

Sam Bankman-Fried 22:22

Super interesting. I am going to ignore the obvious answer, which is that the guy is not very good, and look at cases where it's someone pretty impressive but not the right fit for this. There are a few things. One of them is how much they are going to want to deal with really messy shit. This is a huge thing! When I was working at Jane Street, I had a great time there. One thing I didn't realize was valuable until I saw the alternative—if I decided that it was a good trade to buy one share of Apple stock on NASDAQ, there was a button to do that.

If you, as a random citizen, want to buy one share of Apple stock directly on an exchange, it'll cost you tens of millions of dollars a year to get set up. You have to get a physical colocation in Secaucus, New Jersey, have market data agreements with these companies, think about the SIP and the NBBO and whether you're even allowed to list on NASDAQ, and then build the technological infrastructure to do it. But all of that comes after you get a bank account.

Getting a bank account that's going to work in finance is really hard. I spent hundreds, if not thousands of hours of my life, trying to open bank accounts. One of the things at early Alameda that was really crucial to our ability to make money was having someone very senior spend hours per day in a physical bank branch, manually instructing wire transfers. If we didn't do that, we wouldn't have been able to do the trade.

When you start a company, there are enormous amounts of shit that looks like that. Things that are dumb or annoying or broken or unfair, or not how the world should work. But that's how the world does work. The only way to be successful is to fight through that. If you're going to be like, "I'm the CEO, I don't do that stuff," then no one's going to do that at your company. It's not going to get done. You won't have a bank account and you won't be able to operate. One of the biggest traits that is incredibly important for a founder and for an early team at a company (but not important for everything in life) is willingness to do a ton of grunt work if it's important for the company right then.

Viewing it not as "low prestige" or "too easy" for you, but as, "This is the important thing. This is a valuable thing to do. So it's what I'm going to do." That's one of the core traits. The other thing is asking: are they excited about this idea? Will they actually put their heart and soul into it? Or are they going to be not really into it and half-ass it? Those are two things that I really look for.

Pitcher fatigue and the importance of focus

Dwarkesh Patel 25:51

How have you used your insights about pitcher fatigue to allocate talent in your companies?

Sam Bankman-Fried 25:58

Haha. When it comes to pitchers in baseball, there's a lot of evidence that they get worse over the course of the game, partially because it's hard on the arm. It's worth noting that the evidence seems to support the claim that it depends on the pitcher, but in general, you're better off breaking up your outings. It's not just a function of how many innings they pitch that season, but also how many they've pitched extremely recently. If you could choose between someone throwing six innings every six days or three innings every three days, you should use the latter. That's going to get you better pitching on average, and just as many innings out of them—and baseball has since moved very far in that direction. The average number of pitches thrown by starting pitchers has gone down a lot over the last 5-10 years.

How do I use that in my company? There's a metaphor here, except this is with computer work instead of physical arm work. You don't have the same effect where your arm is getting sore, your muscles snap, and you need surgery if you pitch too hard for too long. That doesn't directly translate—but there's an equivalent with people getting tired and exhausted. On the other hand, context is a huge, huge piece of being effective. Having all the context in your mind of what's going on, what you're working on, and what the company is doing makes it easier to operate effectively. For instance, if you could have either two half-time employees or one full-time employee, you're way better off with the one full-time employee, because they're going to have more context than either of the part-time employees would have—and thus be able to work way more efficiently.

In general, concentrated work is pretty valuable. If you keep breaking up your work, you're never going to do as great of work as if you truly dove into something.

How SBF identifies talent

Dwarkesh Patel 28:30

You've talked about how you weigh experience relatively little when you're deciding who to hire. But in a recent Twitter thread, you mentioned that being able to provide mentorship to all the people who you hire is one of the bottlenecks to you being able to scale. Is there a trade-off here where if you don't hire people for experience, you have to give them more mentorship and thus can't scale as fast?

Sam Bankman-Fried 28:51

It's a good question. To a surprising extent, we found that the experience of the people that we hire has not had much correlation with how much mentorship they need. Much more important is how they think, how good they are at understanding new and different situations, and how hard they try to integrate how FTX works into their understanding of the code. By and large, we have found that other things are much better predictors than experience of how much oversight and mentorship someone is going to need.

Dwarkesh Patel 29:35

How do you assess that short of hiring them for a month and then seeing how they did?

Sam Bankman-Fried 29:39

It's tough; I don't think we're perfect at it. But things that we look at are, "Do they understand quickly what the goal of a product is? How does that inform how they build it?" When you're looking at developers, we want people who can understand what FTX is, how it works, and thus what the right way to architect things would be, rather than treating it as an abstract engineering problem divorced from the ultimate product.

You can ask people, "Hey, here's a high-level customer experience or customer goal. How would you architect a system to create that?" That's one thing that we look for. Another is an eagerness to learn and adapt. It's not trivial to test for that, but you can do some amount of it by giving people novel scenarios and seeing how much they break versus how much they bend. That can be super valuable. We also specifically search for developers who are willing to deal with messy scenarios rather than wanting a pristine world to work in. Our company is customer-facing and has to interface with third-party tooling. All those things mean that we have to deal with things that are messy, the way the world actually is.

Why scaling too fast kills companies

Dwarkesh Patel 31:09

Before you launched FTX, you gave detailed instructions to the existing exchanges about how to improve their system, how to remove clawbacks, and so on. Looking back, they left billions of dollars of value on the table. Why didn't they just fix what you told them to fix?

Sam Bankman-Fried 31:22

My sense is that it's part of a larger phenomenon. One piece of this is that they didn't have a lot of market structure experts. They did not have the talent in-house to think really deeply about risk engines. Also, there were cultural barriers between myself and some of them, which meant that they were less inclined than they otherwise would have been to take it very seriously. But ignoring those factors, there's something much bigger at play here. Many of these exchanges had hired a lot of people and gotten very large. You might think that made them more capable of doing things, with more horsepower. But in practice, most of the time we see a company grow really fast and get really big in terms of people, it becomes an absolute mess.

Internally, there are huge diffusion-of-responsibility issues. No one's really taking charge. You can't figure out who's supposed to do what. In the end, nothing gets done. You actually start hitting negative marginal utility of employees pretty quickly: the more people you have, the less total you get done. That had happened to a number of them by the point where I sent them these proposals. Where did the proposals go internally? Who knows. The Vice President of Exchange Risk Operations (not the real one—the fake one operating under some department with an unclear goal and mission) had no idea what to do with it. Eventually, she passed it off to a random friend of hers who was the developer for the mobile app: "You're a computer person, is this right?" They likely said, "I don't know, I'm not a risk person," and that's how it died. I'm not saying that's literally what happened, but it sounds like something like that probably happened. It's not like they had people who took responsibility and thought, "Wow, this is scary. I should make sure that the best person in the company gets this," and passed it to the person who thinks about their risk modeling. I don't think that's what happened.

The future of crypto

Dwarkesh Patel 33:51

There are two ways of thinking about the impact of crypto on financial innovation. One is the crypto-maximalist view that crypto subsumes tradfi. The other is that you're basically stress-testing some ideas in a volatile, fairly unregulated market that you're actually going to bring to tradfi, but this is not going to lead to some sort of decentralized utopia. Which of these models is more correct? Or is there a third model that you think is the correct one?

Sam Bankman-Fried 34:18

Who knows exactly what's going to happen? It's going to be path-dependent. If I had to guess, I would say that a lot of properties of what is happening in crypto today will make their way into tradfi to some extent. I think blockchain settlement has a lot of value and can clean up a lot of areas of traditional market structure. Composable applications are super valuable and are going to get more important over time. In some areas of this, it's not clear what's going to happen. When you think about how decentralized ecosystems and regulation intersect, it's a little TBD exactly where that ends up.

I don't want to state with extreme confidence exactly what will or won't happen. Stablecoins becoming an important settlement mechanism is pretty likely. Blockchains in general becoming a settlement mechanism, collateral clearing mechanism, and more assets getting tokenized seem likely. There being programs written on blockchains that people can add to that can compose with each other seems pretty likely to me. A lot of other areas of it could go either way.

Risk, efficiency, and human discretion in derivatives

Dwarkesh Patel 35:46

Let's talk about your proposal to the CFTC to replace Futures Commission Merchants with algorithmic real-time risk management. There's a worry that without human discretion, you have algorithms that will cause liquidation cascades when they were not necessary. Is there some role for human discretion in these kinds of situations?

Sam Bankman-Fried 36:06

There is! The way that traditional futures market structure works is you have a clearinghouse with a decent amount of manual discretion in it connected to FCMs, some of which use human discretion and some of which use automated risk management algorithms with their clients. The smaller the client, the more automated it is. We are inverting that: at the center, you have an automated clearinghouse. Then you connect it to FCMs, which can use discretionary systems when managing their clients.

The key difference here is that, one way or another, the initial margin has to end up at the clearinghouse. A programmatic amount of it has to be there, and the clearinghouse acts in a clear, rules-based way. The goal of this is to prevent contagion between different intermediaries: whatever credit decisions one intermediary makes with respect to its customers don't pose risk to other intermediaries, because someone has to post the collateral to the clearinghouse in the end—whether it's the FCM, their customer, or someone else. It gives clear rules of the road, keeps systemic risk from spreading throughout the system, and contains risk to the parties that choose to take it on - to the FCMs that choose to make credit decisions there.
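As a very rough sketch of what "programmatic" means here, a toy real-time margin check; the maintenance parameter and the partial-liquidation rule are hypothetical, not FTX's actual risk engine:

```python
MAINTENANCE_MARGIN = 0.05  # hypothetical 5% maintenance requirement

def check_account(collateral, positions, prices):
    """positions: {symbol: (qty, entry_price)}. Mark the account to market
    and return the action the clearinghouse would take right now."""
    pnl = sum(qty * (prices[s] - entry) for s, (qty, entry) in positions.items())
    notional = sum(abs(qty) * prices[s] for s, (qty, _) in positions.items())
    equity = collateral + pnl
    if notional == 0:
        return "ok"
    if equity / notional < MAINTENANCE_MARGIN:
        # Close positions incrementally rather than all at once,
        # re-checking the ratio after each step.
        return "liquidate_partially"
    return "ok"

# Example: $5,000 of collateral against a 1 BTC long opened at $30,000.
print(check_account(5_000, {"BTC": (1, 30_000)}, {"BTC": 26_000}))  # liquidate_partially
```

The point is that the same rule runs continuously for every account, so no intermediary's bespoke credit decision can quietly load risk onto the others.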

There is a potential role for manual judgment. Manual judgment can be valuable and add a lot of economic value. But it can also be very risky when done poorly. In the current system, each FCM is exposed to all of the manual bespoke decisions that each other FCM is making. That's a really scary place to be in, we've seen it blow up. We saw it blow up with LME nickel contracts and with a few very large traders who had positions at a number of different banks that ended up blowing out. So, this provides a level of clarity, oversight, and transparency to this system, so people know what risk they are, or are not taking on.

Dwarkesh Patel 38:29

Are you replacing that risk with another risk? If there's one exchange that has the most liquidity in futures, and it's the one exchange where you're posting all your collateral (across all your positions), then isn't the risk that the single algorithm the exchange is using will determine when and if liquidation cascades happen?

Sam Bankman-Fried 38:47

It's already the case that if you put all of your collateral with a prime broker, whatever that prime broker decides (whether it's an algorithm or a human or something in between) is what happens with all of your collateral. If you're not comfortable with that, you can choose to spread it out between different venues. You could use one venue for some products and another venue for other products. If you cross-collateralize and cross-margin your positions, putting them in the same place, you get capital efficiency. But the downside is that the risk of one can affect the other. There's a balance there, and I don't think it's a binary thing.
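A toy illustration of that trade-off, assuming a hypothetical flat 10% initial margin (real margin models are far more involved): netting offsetting positions in one place locks up much less collateral, at the cost of tying their risks together.

```python
INITIAL_MARGIN = 0.10  # hypothetical flat 10% of notional

def isolated_margin(notionals):
    # Each position margined on its own, e.g. at separate venues.
    return sum(abs(n) * INITIAL_MARGIN for n in notionals)

def cross_margin(notionals):
    # Positions share one collateral pool, so offsetting exposure nets out.
    return abs(sum(notionals)) * INITIAL_MARGIN

book = [1_000_000, -800_000]  # a long hedged by a short
print(isolated_margin(book))  # 180000.0 locked up across venues
print(cross_margin(book))     # 20000.0 locked up in one place
```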

Dwarkesh Patel 39:28

Given the benefits of cross-margining and the fact that less capital has to be locked up as collateral, is the long-run equilibrium that the single exchange will win? And if that's the case, then, in the long run, there won't be that much competition in derivatives?

Sam Bankman-Fried 39:40

I don't think we're going to have a single exchange winning. Among other things, there are going to be different decisions made by different exchanges—which will be better or worse for particular situations. One thing that people have brought up is, "How about physical commodities?" Like corn or soy: what would our risk model say about that? It's not super helpful for those commodities right now, because it doesn't know how to understand a warehouse. So you might want to use a different exchange with a more bespoke risk model, one that tried to understand, the way a human would, what physical positions someone had on. That would totally make sense, and that can cause a split between different exchanges.

In addition, we've been talking about the clearing house here, but many exchanges can connect to the same clearinghouse. We're already, as a clearing house, connected to a number of different DCMs and excited for that to grow. In general, there are going to be a lot of people who have different preferences over different details of the system and choose different products based on that. That's how it should work. People should be allowed to choose the option that makes the most sense for them.

Jane Street vs FTX

Dwarkesh Patel 41:00

What are the biggest differences in culture between Jane Street and FTX?

Sam Bankman-Fried 41:05

FTX has much more of a culture of morphing and taking on a lot of random new shit. I don't want to say Jane Street is an ossified place or anything; it's somewhat nimble. But it is more of a culture of, "We're going to be very good at this particular thing on a timescale of a decade." There are some cases where that's true of FTX, because some things are clearly part of our core business for a decade. But there are other things that we knew nothing about a year ago and now have to get good at. There's been more adaptation. It's also a much more public-facing and customer-facing business than Jane Street is—which means that there are lots of things, like PR, that are much more central to what we're doing.

Conflict of interest between broker and exchange

Dwarkesh Patel 41:56

Now in crypto, you're combining the exchange and the broker—they seem to have different incentives. The exchange wants to increase volume, and the broker wants to better manage risk, maybe with less leverage. Do you feel that in the long run, these two can stay in the same entity given the potential conflict of interest?

Sam Bankman-Fried 42:13

I think so. There's some extent to which they differ, but to a greater extent they actually want the same thing—and harmonizing them can be really valuable. One thing both want is to provide a great customer experience. When you have two different entities with two completely different businesses, but customers have to go from one to the other, you end up getting the least common denominator of the two as a customer: everything is going to be supported as poorly as whichever of the two entities supports what you're doing most poorly - and that makes it harder. Whereas synchronizing them gives us more ability to provide a great experience.

Bahamas and Charter Cities

Dwarkesh Patel 42:59

How has living in the Bahamas impacted your opinion about the possibility of successful charter cities?

Sam Bankman-Fried 43:06

It's a good question. It's the first time, and I've updated positively. We've built out a lot of things here that have been impactful. It's made me feel like it is more doable than I previously would have thought. But it's a lot of work. It's a large-scale project if you want to build out a full city—and we haven't built out a full city yet. We built out some specific pieces of infrastructure that we needed, and we've gotten a ton of support from the country. They've been very welcoming, and there are a lot of great things here. But this is way less of a project than taking a giant, empty plot of land and creating a city on it. That's way harder.

SBF’s RAM-skewed mind

Dwarkesh Patel 43:47

How has having a RAM-skewed mind influenced the culture of FTX and its growth?

Sam Bankman-Fried 43:52

On the upside, we've been pretty good at adapting and understanding what the important things are at any time. Training ourselves quickly to be good at those even if it looks very different than what we were doing. That's allowed us to focus a lot on the product, regulation, licensing, customer experience, branding, and a bunch of other things. Hopefully, it means that we're able to take whatever situations come up and provide reasonable feedback about them and reasonable thoughts on what to do rather than thinking more rigidly in terms of how previous situations were. On the flip side, I need to have a lot of people around me who will try and remember long-term important things that might get lost day-to-day. As we focus on things that pop up, it's important for me to take time periodically to step back and clear my mind and remember the big picture. What are the most important things for us to be focusing on?

Please share if you enjoyed this episode! Helps out a ton!
