FOUR HOURS?!?

Well, folks... it finally happened. I finally found a reason to use AI to summarize something for me.

Honestly, this feels like he (and other intelligence-explosion people) has a huge blind spot. The exponential is *one way* AGI *could* happen, but it requires the log-linear graph to continue without interruption.

By this logic, we would've already had "the big earthquake" a few times. There's nothing guaranteeing that scaling keeps working; it's one narrow path. Feels sort of like smart people have been brainwashed into believing this.

It’s a good thought experiment, but it’s certainly not proven reality.

I can feel the AGI and all that; things are coming. The paths are just much broader than people discuss and state.

Absolutely. Saying the advanced models of the future will be built on LLMs alone is like saying you can cut out the language center of someone's brain and still have their entire being. The other parts of future AI "minds" are just not quite as far along in development and usefulness yet.

It's for sure not proven; the future is always uncertain. But the historical pattern is very clear, and it's been in place for a long time. Specifically, the key is that we don't need to extrapolate decades forward to reach the point where generalized human-comparable capabilities might materialize. Yes, there's nothing assuring that scaling keeps working, but there's also no reason it should suddenly stop, and there's been no sign of such a slowdown at all.

Why do you say "when generalized human-comparable capabilities might materialize"? I don't think we know what generalized human-comparable capabilities are.

Yeah, we don't know. They like to use the term AGI. My point is: Leopold is saying here that AGI is possible in 2027, not 2077.

My critique: the bois have seen some dots on a graph and drawn a nice straight line through them, and even though the final dots aren't there yet, they're convinced they will be. We could also drop a few marbles on the floor and neatly drape a string across them, and claim to know by the length of the remaining string where the rest of the marbles shall fall. Or, perhaps, we can put our strings and scribbles wherever we like, yet just admit that we don't know where the marbles will fall, because we have no good account of the alleged causality behind the pattern.
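For what it's worth, the whole exercise being critiqued fits in a few lines. A minimal sketch, where the data points, units, and fit are all invented for illustration:

```python
# A toy version of the "straight line through the dots" exercise.
# All numbers are made up; the mechanical fit says nothing about
# whether the underlying trend actually continues.
import numpy as np

years = np.array([2019, 2020, 2021, 2022, 2023])
capability_index = np.array([1.0, 4.0, 18.0, 70.0, 300.0])  # hypothetical units

# Fit a straight line in log space, i.e. assume exponential growth.
slope, intercept = np.polyfit(years, np.log10(capability_index), 1)

# Extrapolating is trivial; justifying the extrapolation is the hard part.
projection_2027 = 10 ** (slope * 2027 + intercept)
print(f"Naive 2027 projection: {projection_2027:,.0f} (same hypothetical units)")
```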

I didn't get the whole way through the interview, but I'm very skeptical of Leopold's views.

> Six months ago, 10 GW was the talk of the town. Now, people have moved on. 10 GW is happening. There’s The Information report on OpenAI and Microsoft planning a $100 billion cluster.

This sounds very miscalibrated for two reasons.

1) Datacenters and power plants are very complicated pieces of infrastructure. You need various kinds of state approval, geological surveys, civil engineering contractors, and so on, which means you need a full-time operations team running for several years. At the scale we're talking about, you start needing to buy first-of-a-kind power plant hardware that has to be custom engineered first. Even the ~$100mm datacenters at my workplace require a full-time team and take years to build out. (Also, re: the later point that you can buy up power-hungry aluminium smelters in structural decline: I agree, except, by a sort of efficient-markets argument, why hasn't this already been done for previous datacenters? What changes now? I feel like there's a Chesterton's fence here.)

2) Reading a report from The Information about $100bn of capex and taking it at face value is very questionable. That's multiple times Microsoft's annual capex budget; if they do spend that much, there will be signs of it that Wall St analysts will start seeing many months in advance. (Rough numbers sketched below.)
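To put rough numbers on both points, here's a back-of-envelope sketch; the figures are approximate and from memory, not sourced:

```python
# Rough sanity checks on the scale (all figures approximate).
gw_per_large_reactor = 1.0   # a big nuclear reactor is roughly 1 GW(e)
reactors_equivalent = 10 / gw_per_large_reactor
print(f"10 GW is roughly {reactors_equivalent:.0f} large nuclear reactors")

msft_annual_capex_bn = 30    # Microsoft FY2023 capex, order of magnitude
multiple = 100 / msft_annual_capex_bn
print(f"$100bn is ~{multiple:.1f}x a year of Microsoft capex")
```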

> For the average knowledge worker, it’s a few hours of productivity a month. You have to be expecting pretty lame AI progress to not hit a few hours of productivity a month.

I think very few knowledge workers would pay $100/mo, not just because it's a huge amount, but because of differentiated pricing: the marginal value of the $100 model over the $10 model isn't enough for most individuals to justify the difference.
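To make the marginal-value point concrete, a sketch where every number is invented:

```python
# Illustrative arithmetic for the pricing argument (all numbers hypothetical).
# Leopold's framing compares gross value to the price; the objection is that
# the relevant comparison is *marginal* value over the cheaper tier.
hours_saved_per_month = 3       # "a few hours of productivity a month"
loaded_hourly_wage = 50         # $/hr, hypothetical knowledge worker

gross_value = hours_saved_per_month * loaded_hourly_wage  # $150 > $100 price

marginal_hours_vs_cheap_tier = 0.5  # hypothetical extra value of the $100 tier
marginal_value = marginal_hours_vs_cheap_tier * loaded_hourly_wage  # $25

# The decision hinges on $25 of marginal value vs. the $90 price gap.
print(f"gross: ${gross_value}/mo, marginal over $10 tier: ${marginal_value:.0f}/mo")
```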

That said, I think if these models get good enough, we will see a lot of enterprise / site licenses for LLMs that could go up to this price, because an employer is willing to pay more for worker productivity than workers are. But I wouldn't be surprised to see a lot of the more valuable contracts go to wrapper LLMs run by LexisNexis and Elsevier affiliates and the like, because competition can commoditise LLMs, leaving the producer surplus to flow to the IP owners.

But taking a step back, it feels weird to me to assume that you'd raise Copilot prices to fund $100bn in capex. If you need $100bn that badly, just save it up, or sell some bonds, or take a GPU-secured loan from a consortium of banks; there is no principled reason to risk losing the Copilot market by raising prices too early.

> The question is, when does the CCP and when does the American national security establishment realize that superintelligence is going to be absolutely decisive for national power? This is where the intelligence explosion stuff comes in, which we should talk about later.

Neither establishment is asleep at the wheel in this particular case. Obama called "Superintelligence" by Bostrom one of his favourite books 10 years ago, and with the amount Americans have been publicly fearmongering about Chinese LLMs, you can bet it's a common conversation topic in Beijing. Rather, I think the apparent lack of action is just because nobody is quite sure what to do with this situation, as it's so hard to forecast. What, concretely, would you have politicians do? Disclaimer: I know very little about China, but I have studied Chinese history and live in Hong Kong.

> There are reports, I think Microsoft. We'll get into it.

The press release linked on the word "reports" discusses G42, which as far as I can tell is an "AI" consulting company running on Azure cloud compute. I could be wrong, though: the chair of G42 is famously the UAE's top spy, and I don't know what to make of that. But I worked at an LLM research lab in SF for a while, so I think my BS radar is reasonably well calibrated.

> My primary argument is that if you’re at the point where this thing has vastly superhuman capabilities — it can develop crazy bioweapons targeted to kill everyone but the Han Chinese, it can wipe out entire countries, it can build robo armies and drone swarms with mosquito-sized drones — the US national security state will be intimately involved.

What the actual #$%(&?

I realise these are just hypotheticals, but the fact that CCP ethnic bioweapons are a salient idea indicates to me that Leopold should read a book or two about Chinese history. Of course I can't prove that nobody in Beijing wants this, but it conflicts so sharply with my understanding of the PRC state that I can't help but call BS.

> What the actual #$%(&?

Note that he said "can develop", not "Chinese are trying to develop". The "US national security state" won't base its actions on the CCP's or Xi's actual intentions, but on its own perception of the worst-case scenario of those intentions.

I think that in the age of AGI they will mostly just want to keep trying to defend Taiwan and push back against Chinese expansionism. But as someone who watches serpentza, it's clear to me that Chinese nationalism and ethnic pride are big in China and that Xi has decided to treat the US as an enemy. I expect such things will weigh heavily on US military planners. On top of that, there should be worries about how an ASI would extrapolate Xi Jinping Thought into geopolitical action.

Thanks for sharing. I made it to 1hr 26 mins and then I had to bail.

I am a huge fan of long-form content, but this one felt too meandering, like two friends sitting and chatting about conversations they'd had before. It didn't feel that fresh, and it also felt too influenced by recent film history. Sorry, Dwarkesh, this one didn't advance my understanding of this space.

It's funny how optimistic he is. And this is the first time I've heard of the chatbot as a coworker (integrated into your work environment in a way that lets you assign it a task in Slack and wait while the code and tests are written and pass CI). This is definitely a new threat.
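A hypothetical sketch of what that workflow might look like; slack_bolt is a real library, but `run_coding_agent` and `open_pr_when_ci_green` are invented placeholders for whatever agent backend would do the actual work:

```python
from slack_bolt import App

def run_coding_agent(task: str) -> str:
    """Placeholder: a real agent would write code and tests on a branch."""
    return "agent/" + task[:30].replace(" ", "-")

def open_pr_when_ci_green(branch: str) -> str:
    """Placeholder: a real backend would wait for CI to pass, then open a PR."""
    return f"https://example.com/pr-for-{branch}"

# Placeholder token; verification disabled so the sketch constructs offline.
app = App(token="xoxb-...", token_verification_enabled=False)

@app.message("assign:")
def handle_task(message, say):
    task = message["text"].removeprefix("assign:").strip()
    say(f"On it: {task}")
    branch = run_coding_agent(task)
    say(f"Tests passing, PR ready: {open_pr_when_ci_green(branch)}")

# Starting the app (e.g., via Socket Mode) is omitted from this sketch.
```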

Very interesting conversation, thank you 💚 🥃

Leo's argument that AGI will result in some form of full/partial nationalization by the US government does sound reasonable and somewhat inevitable. Knowing that, why would firms like OpenAI / Microsoft / Google / Anthropic etc. spend $ on anything other than commercial applications with near/medium-term ROI? It would be hard for them to justify going for AGI without a commercial ROI case, no? Hence the entire argument for the trillion-$ cluster is self-defeating.

How much did he use ChatGPT to write his manifesto? When former employees keep spouting their old employer's marketing, it makes you wonder. Some of them are clearly young and impressionable researchers.
