31 Comments

If I'm being frank, I think this is ridiculous, and scary. Efficiency is not everything. I'm probably going to sound like a Luddite here, but AI should be used for scientific and material advancement, not to create mega-Sundars. The Silicon Valley narrative of efficiency seems dystopian. It also ignores how much of the modern world is a result of consumer demand; so many innovations exist because there was consumer demand for them. When you replace a Google engineer, you might improve Google's bottom line, but you also eliminate a source of demand for the rest of the economy. And since Google depends on advertising revenue from other corporations, it is indirectly hurting its own bottom line too. Sure, a world of AI firms might distribute the surplus back to human beings, but where does demand even come from in that world? And even if we find a way, do we want this? Work also provides meaning and lessons to us as human beings. It grounds us; it's part of what makes us human. And the world you are describing does not have a place for most humans. Maybe this happens, but I hope it doesn't. We need norms to deal with this.


> Work also provides meaning and lessons to us as human beings.

Yes, but so does simulated work. Or games, or sports, or exploration.

This still necessarily exists in a world where humans exist. And likely in a more fulfilling and evenly distributed form. It just doesn't drive the intergalactic economy.


As a shareholder of Alphabet, I welcome mega-Sundar and his copies. But it really raises the question: What are most of us going to do with our time?

Are governments going to be ready to deal with such transformative change? It's kind of insane how little discussion there is on creating frameworks for different AGI scenarios. I guess everyone is too busy building AI and/or implementing it in their day-to-day lives.


I'm pretty skeptical that the average government employee is at all aware of this stuff. Maybe some NatSec types are, but most government employees seem too siloed to be aware of any of this.


I think that very few people in the world are extrapolating advanced AI scenarios. There still hasn't been a tipping point or shock event (think how fast the world closed in March 2020), and there may never be one. Progress might feel gradual enough that one day the US government wakes up, sees unemployment at 10%, and only then realizes why.

If the transformation is very slow (say, it takes a generation to replace everyone in the service economy), then it might not disrupt society much. With total fertility rates where they are, we might see enough of a shift in future generations from a service economy to a post-scarcity economy: a combination of UBI + an entertainment economy (think professional athletes) + a tiny portion of humans remaining in the service, manufacturing, and agricultural economies.

If the transformation is fast (say 20 years or less), we will probably go through a period of serious societal unrest.

The following paper kind of lays out a scenario where humans are disempowered by AI:

https://gradual-disempowerment.ai


If or when a large share of the population doesn’t have work, something like UBI will become a lot more feasible politically.


I work as a computer electronics hardware engineer at a 300k-employee corporation, and I have 30 years of experience with software coding as well. We have MS Copilot and an internal ChatGPT system in place that allows us to use AI even on confidential data.

And I like reading sci-fi. And I'm just running DeepSeek 14B on my home PC under Linux.

With that said, based on my experience, I find this article about as convincing as a claim that the second coming of Jesus will happen tomorrow. (No insult meant to Christians.)

I just can't get over the idea that some computer system could ever work well enough to viably replace humans doing non-trivial jobs. I just can't see it. Sorry.


there's a guy who wrote a whole book about a similar scenario


First chapter of Life 3.0?


Who, and what book?


Robin Hanson, Age of Em


I don't see why, in this scenario, Sundar would remain on top. If we reach the point where agents can do everything else, then those agents will necessarily require all sorts of capabilities that allow them to interact with, navigate, and reason about the world. They will need a certain level of independence and autonomy.

This would also mean that intelligence has advanced to such a degree that it dwarfs Sundar. If the intelligence surpasses Sundar, a firm would be better off without him at the top.

In this case, AIs would be running the firm itself.

Are we supposed to believe that AIs running firms will still be part of a system ultimately controlled by humans? That doesn't seem right.

Firms—composed of an amalgamation of agents that can interface with humans, explore the world to gather necessary information, conduct advanced planning, deploy other agents, and negotiate—are somehow supposed to be governed by humans in a world that humans still control?

I don’t see where, in this scenario, humans remain in charge of anything.

During the Industrial Revolution, machines extended human capabilities. Humans simply moved one level up—they regulated and guided the actuators. The person tilling the field became the one driving the tractor.

In this scenario, however, as soon as a tractor-equivalent AI is created, the machine becomes a better and more efficient tractor driver. You will always lose the race to the machine. This is replacement, not extension.

And to top it off: a government with decrepit leadership, one that could barely handle COVID, is supposed to "do something"?


I think the incentives actually cut against mega-firms of AI agents, and you're missing compute costs. If you do some napkin calculations on how many AI agents you could run 24/7 on all the US AI clusters today (or on Stargate), you get an effective labor force on the order of the low-to-mid tens of millions. That just isn't much relative to what is sketched out here, which means AI firms will be compute-bottlenecked even after the capability to run them exists.
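
A minimal version of that napkin math. Every constant here is an assumption chosen for illustration, not a sourced figure:

```python
# Back-of-envelope capacity estimate. All constants are assumptions.
H100_EQUIVALENTS = 10_000_000       # assumed US AI fleet, in H100-equivalents
TOKENS_PER_GPU_PER_SEC = 1_000      # assumed inference throughput per GPU
TOKENS_PER_AGENT_PER_SEC = 300      # assumed burn rate of one full-time agent

total_throughput = H100_EQUIVALENTS * TOKENS_PER_GPU_PER_SEC
agents_24_7 = total_throughput / TOKENS_PER_AGENT_PER_SEC
print(f"~{agents_24_7 / 1e6:.0f}M agents running 24/7")  # ~33M under these guesses
```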

Therefore, firms will have to make the calculation: if I'm doing a large, complex project and want to spend $X on it, I can either do it with 1 agent, costing $X worth of tokens for that one agent, or I can split $X across 2 (or n) agents, where *each agent now has to use 1/2 the tokens* to stay on budget, but the job finishes 2x faster. And this assumes zero communication costs between agents *and* that you get significant gains from having multiple agents work on the problem!

My guess is that inference will typically be fast enough (especially given that AI workers run 24/7) that you are cost-dominated rather than time-dominated, which means you'd lean towards the single agent, because token communication/coordination costs between agents are >0.
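
A toy version of that tradeoff, with a hypothetical token price and a small assumed per-agent coordination tax:

```python
# Toy 1-agent vs n-agent cost model. All constants are assumptions.
TOKEN_PRICE = 10 / 1_000_000    # assumed $10 per million tokens
BASE_TOKENS = 50_000_000        # assumed tokens the project needs with 1 agent
COORD_OVERHEAD = 0.05           # assumed extra token spend per additional agent

for n in (1, 2, 4, 8):
    # Splitting n ways cuts wall-clock time ~n-fold (optimistically assuming
    # perfect parallelism), but coordination inflates total token spend.
    total_tokens = BASE_TOKENS * (1 + COORD_OVERHEAD * (n - 1))
    print(f"n={n}: cost ${total_tokens * TOKEN_PRICE:,.2f}, ~{n}x faster")
```

Under this model, cost only rises with n while time falls, so a cost-dominated firm defaults to the single agent; only a hard deadline justifies paying the coordination tax.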


Yeah, this post basically assumes infinite revenue. Perhaps economic growth and more efficient compute get us there some day, but it definitely doesn't describe early AI-first firms, where, as you say, there will be cost constraints. People will also still be part of the equation, since AI won't be reliable enough to just run unsupervised. It will be interesting to see whose jobs mostly disappear and whose get augmented with AI.


While I appreciate the educated guess, I find it difficult to imagine what an AI firm would be like, because so much of what human firms are (as you say) is a reaction to various traits of humans. And, I guess, to non-AI software systems.

Changing the underlying substrate of firms to AI from humans would seem to change far more than I could predict. Would "firm" even be the right term for it?


In a fully automated economy, where would demand for products and services come from? Who are the consumers? Back in 2009, I read Martin Ford's groundbreaking book, "The Lights in the Tunnel." It offers a vivid thought experiment about what AI job automation would mean for the economy and how capitalism could be preserved in that scenario. The book is now available as a free download at MFordFuture.com. Highly recommended and thought-provoking!


Another reason I suppose prokaryotes haven’t evolved much is that simpler genomes are less prone to mutation.

In your metaphor, a "genetic mutation" can be thought of as a bet taken by the firm on a new feature or product launch. Apple launching the iPhone was a really important mutation to its DNA, one that allowed it to scale massively.

Just as fortunate mutations are what allowed prokaryotes to eventually scale up to multicellular eukaryotes.

But the less DNA you have, the fewer genetic “bets” you are able to place.


The constraint is actually with the phenotype.


This post is similar to Life 3.0, chapter 1.


A more interesting question is: what will humans do? We are not going to be able to retrain or repurpose our way out of this one, folks.


So, when exactly does this miraculous AGI super-Sundar come out?


Refreshing, thought-provoking. You made me think. Will AI create a new Mag Seven? Each of the current leaders grew from a single disruption that never stopped growing.

As for the Super Sundar, let's rethink. Great leaders inspire brilliant people with OTHER ideas and different ways of thinking to achieve greatness. Cloning the boss seems fraught. This is the challenge: balancing efficiency against inefficient debate and deliberation.


I think you could apply this to politics as well. Hundreds of robots figuring out how to maximize vote share from humans, able to operate on shoestring budgets so they don't need to fundraise nearly as much. Maybe you still need a human politician at the center to actually hold the role, but why have staff or consultants?


I'm the type of mf that likes Dwarkesh posts before reading them because I already know they're gonna be fire 🔥


What is preventing the AI agent population from getting "locked in" and stagnating? Each firm duplicates the "best" models, and thus has low agent variance, and thus slow evolution.

How would this ecosystem create and discover the *new best* models?

Would science today be better if it was just a 400-year swarm of Isaac Newtons? Would John von Neumann get excluded?

Would business today be better if every company for the last 200 years was made of Cornelius Vanderbilt clones?

If every startup since 2010 used the "best" model, they would all consist of Steve Jobs clones. Would Tesla and SpaceX have been built?
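
To make the lock-in worry concrete, here is a toy simulation (purely illustrative: a scalar stands in for "model quality," and the variance numbers are invented). Pure cloning of the current best freezes quality at the first generation's maximum, while even a little copy variance keeps discovering better models:

```python
import random

random.seed(0)

def evolve(generations=200, pop=100, copy_sd=0.0):
    # Each agent model gets a scalar quality score. Every generation the
    # firm clones the best model; copy_sd is the variance added per copy.
    scores = [random.gauss(0, 1) for _ in range(pop)]
    for _ in range(generations):
        best = max(scores)
        scores = [best + random.gauss(0, copy_sd) for _ in range(pop)]
    return max(scores)

print(f"pure cloning:  {evolve(copy_sd=0.0):.2f}")  # frozen at the initial best
print(f"noisy cloning: {evolve(copy_sd=0.1):.2f}")  # keeps climbing
```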
