I wasn’t expecting this (that’s on me): bulletproof counterarguments and factual takedowns. Brilliant essay.
> We’re talking about something that is potentially more powerful than any human
What makes you claim that? AI doomers and safetyists are wrong to assume that the current LLM/RL paradigm will exponentially accelerate into something creative, autonomous, disobedient, and open-ended. The "AGI via Scaling Laws" claim is flawed for similar reasons (see my critique here: https://scalingknowledge.substack.com/i/124877999/scaling-laws).
It seems to be what Marc thinks will happen, given all the things it will be able to do.
Starting from his premise, it makes sense.
What exactly makes you think that? He writes that it will "augment human intelligence" (listing many examples of augmentation), not that we'll have a superhuman intelligence.
Some thoughts:
1) Arguing about whether AI will be in human control can be confusing because the idea of control is a philosophical quagmire. Personally, I think it is easier to think in terms of whether AI could have substantial negative unintended/unforeseen consequences than whether AI could be outside of human control.
2) We already have superhuman AI, it is just superhuman in narrow domains, such as Go and protein folding.
3) AI doesn't have to cause a singularity to have substantial negative unintended/unforeseen consequences. For example:
a) AlphaFold could be used to make bioweapons that cause great harm, even though I don't believe it was the intention of the DeepMind researchers to create bioweapons.
b) AI trading algorithms could inadvertently distort capital allocation in ways that substantially negatively impact the economy, without the makers of the trading algorithms intending that.
c) AI could make it easy to mass manufacture cheap autonomous weapons systems that cause mass casualties or are used by tyrants to consolidate power over their population.
4) I don't think that alignment necessarily helps with any of the above three examples, but I do think they demonstrate examples in which the AI is substantially more powerful than any *individual* human.
5) The above examples are perhaps somewhat similar to the atomic bomb: the atomic bomb is *in some ways* controlled by humans, but is also arguably more powerful than any human. Even if you are a human who has access to "the red button", you don't have the power to stop a nuclear weapon from being used against you. Similarly, you "controlling" an AI that can design a bioweapon doesn't give you the power to prevent a bioweapon being used against you.
6) Another way to put the above might be that even if you control a piece of technology, you might not be able to control the systems, incentives, and structures that the technology creates (which I will refer to hereafter as the logic of the technology). Even if you control the bomb, you are subject to the logic of the bomb. If it were easier to make stealthy ICBMs and easier to detect submarines, the logic of the bomb might force a preemptive strike and the likely eradication of much of humanity. The makers of the bomb could not have known in advance that this would not be the case. As we currently work to build AI, it is very difficult to predict what it will be better and worse at, and how those dynamics will impact the future of humanity. Even if AI is not itself "in control", there is a substantial chance that the logic of AI will be.
I guess I'm an "AI doomer and safetyist", but I assure you AGI is coming soon; I give it a 50% chance by 2027. Your article is unconvincing. If you'd like to know more about what I think on this topic, go to takeoffspeeds.com, play around with the model, and read the associated report and the previous Bio Anchors report. My median setting for the 2022 training requirements variable is 1e29. Also see https://bounded-regret.ghost.io/what-will-gpt-2030-look-like/ Happy to throw more links around if anyone is interested in more reading on the topic of AGI timelines.
I think your report is based on the wrong assumption that you can predict the invention of novel technologies. Did you even read my linked reply to Gwern et al.? Quoting myself here:
Expert opinion won't get us far, as the growth of knowledge is unpredictable. Stone Age people couldn’t have predicted the invention of the wheel, since its prediction necessitates its invention. We need a hard-to-vary explanation to understand a system or phenomenon.
[...]
The Scaling Laws hypothesize that LLMs will continue to improve with increasing model size, training data, and compute.
Some claim that at the end of these scaling laws lies AGI. This is wrong. It is like saying that if we make cars faster, we’ll get supersonic jets. The error is to assume that the deep learning transformer architecture will somehow magically evolve into AGI (gain disobedience, agency, and creativity). Professor Noam Chomsky also called this thinking emergentist. Evolution and engineering are not the same.
In his book The Myth of Artificial Intelligence, AI researcher Erik J. Larson shares similar views, criticizing multiple failed "big data neuroscience" projects and writing that “technology is downstream of theory”. AGIs (people) create new knowledge at runtime, while current LLM/transformer-based models can only improve if we give them more training data.
Gwern’s “Scaling hypothesis AGI” is based on the claim that GPT-3 has somewhere around twice the “absolute error of a human”. He calculates that we’ll reach “human-level performance” once we train a model 2,200,000x the size of GPT-3. He assumes that training AGI then becomes a simple $10 trillion investment (in 2038, if the decline in compute costs continues).
The mistake here is assuming that human-level performance on an isolated writing task is a meaningful measurement of human intelligence. Just because an AI model performs on par with humans in one specific task doesn't mean it has the same general intelligence exhibited by humans. A calculator is better than a human at calculating, but it doesn’t replace the work of a mathematician conjecturing new theorems.
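For scale, here is a minimal sketch of the extrapolation quoted above. The scaling-law constants are roughly the ones reported by Kaplan et al. (2020) for the parameter-count law and are used purely as illustration; the only other inputs are the 2,200,000x figure above and GPT-3's publicly reported 175B parameters.

```python
# Illustrative only: the shape of the scaling-law extrapolation criticized above.
# Constants are approximately those reported by Kaplan et al. (2020) for the
# parameter-count law, L(N) ~ (N_c / N)^alpha_N; treat them as placeholders.

GPT3_PARAMS = 175e9      # GPT-3's publicly reported parameter count
SCALE_FACTOR = 2.2e6     # the 2,200,000x figure quoted above

def predicted_loss(n_params: float, n_c: float = 8.8e13, alpha_n: float = 0.076) -> float:
    """Cross-entropy loss predicted by a pure power law in parameter count."""
    return (n_c / n_params) ** alpha_n

target_params = GPT3_PARAMS * SCALE_FACTOR
print(f"hypothesized 'human-level' model: {target_params:.2e} parameters")
print(f"power-law loss at GPT-3 scale:    {predicted_loss(GPT3_PARAMS):.3f}")
print(f"power-law loss at the target:     {predicted_loss(target_params):.3f}")
```

The point of the surrounding critique is that driving this one number down is not the same thing as producing general intelligence.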
In what way does the report rely on the assumption that I can predict the invention of novel technologies, in a stronger or more problematic sense than anyone else's calculations for AI timelines do? Or self-driving car timelines, or moonbase timelines, for that matter. Obviously predicting the future is hard, but there are better and worse ways to do it. The report generates a probability distribution, i.e. it's all about quantifying and managing uncertainty.
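To make "generates a probability distribution" concrete, here is a toy sketch of the general approach. It is not the takeoffspeeds.com model, and every number below is a hypothetical placeholder rather than a figure from the report.

```python
import random

# Toy illustration of timeline modeling under uncertainty. This is NOT the
# takeoffspeeds.com model; all distributions and constants are placeholders.

def sample_arrival_year(rng: random.Random) -> float:
    log10_flop_needed = rng.gauss(29.0, 2.0)     # uncertain training requirement (log10 FLOP)
    log10_flop_affordable_now = 26.0             # assumed affordable today (placeholder)
    oom_growth_per_year = rng.uniform(0.3, 0.9)  # uncertain growth in affordable compute
    years = max(log10_flop_needed - log10_flop_affordable_now, 0.0) / oom_growth_per_year
    return 2023 + years

rng = random.Random(0)
samples = sorted(sample_arrival_year(rng) for _ in range(10_000))
median = samples[len(samples) // 2]
p_by_2030 = sum(year <= 2030 for year in samples) / len(samples)
print(f"median arrival year: {median:.0f}; P(arrival by 2030) = {p_by_2030:.0%}")
```

The output is a distribution over arrival years rather than a single prediction, which is the sense in which such a report quantifies uncertainty rather than claiming to foresee inventions.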
I agree that expert opinion sucks. I've been saying this back when expert opinion was that AGI was far away, and only a few people like me thought it was on the horizon. Now that expert opinion agrees with me, I still try to avoid invoking it for the most part, and instead link to actual arguments and data.
Yeah I read your reply, I said I found it unconvincing. Here are some thoughts:
--I agree that making cars faster doesn't result in supersonic jets. However, I think that a sufficiently large multimodal AutoGPT trained sufficiently long on sufficiently diverse ambitious tasks in the real world... would probably be AGI, capable of dramatically accelerating AI R&D, for example. I do not just assume this but instead argue for it on the basis of the scaling laws and trends on capability metrics. For examples of the sort of arguments I like, see Steinhardt's post on GPT-2030 linked above. I'd be curious to hear more about what you think the barrier is -- what skills will GPT-2030 lack that prevent it from being AGI or, in particular, from automating AI R&D? You mention disobedience, agency, and creativity. Isn't ChatGPT already disobedient and creative? Aren't AutoGPT, ChaosGPT, etc. already agentic?
--I definitely don't assume that human-level performance on an isolated writing task is a meaningful measurement of human intelligence, or that an AI that performs well on one specific task must be AGI. When GPT-2 and GPT-3 came out back in the day, I updated towards shorter AGI timelines precisely because they seemed to have 'sparks of general intelligence,' i.e. they seemed to be good at lots of things, not just one thing.
Besides relying on expert opinion (justificationism), you don't seem to appreciate the importance of hard-to-vary explanations in your predictions, which makes them error-prone:
1. We can't create new knowledge via induction (Bayesian epistemology). Priors lead to an infinite regress: how confident are you in your 90% prior? Without an explanation, all of these numbers are arbitrary.
2. More evidence doesn't simply increase the likelihood of a hypothesis: Europeans had only ever seen white swans until black swans were discovered in 1636.
Your argument that we can scale up AutoGPT is interesting and goes in the right direction, because it's trying to _explain_ how an AGI could work.
However, AutoGPT is not a model; it is just an application that recursively breaks down prompts, which are then handled by an LLM. Its induction-based architecture will also remain unable to create new knowledge (new knowledge can't be created via induction). So I don't see how something like this could perform AI research on its own. It can certainly augment and automate certain AI researcher tasks, but it can't replace them.
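To make the "application around an LLM" point concrete, here is a minimal sketch of that style of loop; the `call_llm` stub and the prompts are hypothetical stand-ins, not AutoGPT's actual code or API.

```python
from collections import deque

def call_llm(prompt: str) -> str:
    # Hypothetical stand-in: a real agent would call an LLM completion API here.
    return f"[model response to: {prompt.splitlines()[0]}]"

def run_agent(goal: str, max_steps: int = 20) -> list[str]:
    """Repeatedly ask the LLM to decompose tasks, then ask it to execute the leaves."""
    results: list[str] = []
    tasks = deque([goal])
    for _ in range(max_steps):
        if not tasks:
            break
        task = tasks.popleft()
        plan = call_llm(f"Break this task into subtasks, one per line:\n{task}")
        subtasks = [line.strip() for line in plan.splitlines() if line.strip()]
        if len(subtasks) > 1:
            tasks.extend(subtasks)  # recursion: subtasks go back onto the queue
        else:
            results.append(call_llm(f"Do this task and report the result:\n{task}"))
    return results

print(run_agent("Write a literature review on scaling laws"))
```

Everything outside `call_llm` is ordinary control flow; the model underneath is unchanged, which is the sense in which AutoGPT is an application rather than a new architecture.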
It seems as though you didn't actually grasp Marc's point about "category error". AI is a buzzword for computer programs that run statistical models. They're great, and they can help us achieve great things, but the claim that they're on some path to far outpace humans in "general intelligence", let alone develop a mind of their own or get "out of our control" in some way, is baseless. Just because you can *imagine* a hypothetical computer program that does such a thing doesn't mean that we're anywhere close, or even that such a program is physically possible to produce.
Regulating "AI" on the basis of your science fiction imaginings makes about as much sense as regulating the laser industry on the basis that somebody might build a laser that eats through the earth's core, or regulating the airline industry to make sure they don't destroy us in the wake of a warp drive malfunction.
The law ought to be rooted in firm, objective evidence and arguments, not fictional what-ifs. Nuclear bombs are physically possible and do exist. It's trivial to demonstrate that viruses can replicate out of our control and damage human health. Comparing those verifiable facts to science fiction imaginings about statistical models is sloppy reasoning by analogy and should play no part in deciding where to point the government gun.
> or even that such a program is physically possible to produce.
Of course it's physically possible to make smarter than human general intelligence. How are we even entertaining this question at this stage.
> makes about as much sense as regulating the laser industry on the basis that somebody might build a laser that eats through the earth's core
You definitely would talk about regulating this if most major experts in the field were saying that this is a distinct and probable risk (akin to pandemics and nuclear weapons).
> It's trivial to demonstrate that viruses can replicate out of our control and damage human health.
What's your threshold for damage? How much damage would an AI have to do for you to accept that it should be regulated? How involved does the AI have to be in the process? If an AI provides detailed and easy to understand bioweapon instructions to terrorists, would that count as AI-induced harm, or is that just terrorist=bad as usual?
> Of course it's physically possible to make smarter than human general intelligence. How are we even entertaining this question at this stage.
Asserting something isn’t an argument and can be dismissed without argument.
> You definitely would talk about regulating this if most major experts in the field were saying that this is a distinct and probable risk (akin to pandemics and nuclear weapons).
Textbook appeal to authority. What about all of the experts who disagree? What about the financial interests of the pro-regulation experts in cementing their lead via regulation? “A bunch of experts say so” isn’t a valid argument.
> What's your threshold for damage? How much damage would an AI have to do for you to accept that it should be regulated? How involved does the AI have to be in the process? If an AI provides detailed and easy to understand bioweapon instructions to terrorists, would that count as AI-induced harm, or is that just terrorist=bad as usual?
None of these questions are relevant to my point. My point is that nukes and viruses can demonstrably, directly result in exponential physical processes that can put life and property in jeopardy. Hence we all have a legitimate interest in their handling, just like you have a legitimate interest if your next door neighbor is stockpiling dynamite in their garage.
That’s not comparable to a chatbot that can tell you how to make dynamite. We already have Google, which can tell you how to make dynamite (or bioweapons, or nukes). Publishing such information is firmly protected by the First Amendment, as is coming up with new chemical formulas for such things or any other hypothetical AI chatbot activity.
I think this is a good breakdown. Marc's optimism strikes me as exactly the same kind of utopian naivete that drove adoption of the internet, thinking it would lead to a new information age where everyone will be more knowledgeable and better informed because they have the world's information at their fingertips. Who still thinks that today? Utopian naivete is great for driving progress, but it blinds you to dangers.
> We’re talking about something that is potentially more powerful than any human
"Powerful" is too vague and will be misinterpreted. Substitute "intelligent" as that's more concrete.
> But a lot of the harm from China developing AI first comes from the fact they probably will give no consideration to alignment issues.
This is completely backwards. China has *way more* motivation to work on and solve alignment, because they want their AI to conform to and enforce their political agenda, and they will not tolerate anything less. That's alignment.
By contrast, loosely regulated capitalist countries have little to no incentive to work on alignment; they are the ones that reward being first across the AGI line with little thought to ethics.
Don't worry that China won't work on alignment; worry that they will, and that they will solve it, and reach AGI, first.
"The Russian nuclear scientists who built the Chernobyl Nuclear Power Plant did not want it to meltdown, the biologists at the Wuhan Institute of Virology didn’t want to release a deadly pandemic"?
The Chernobyl meltdown was the most egregious case of (Ukrainian) operator error known to man until, of course, Ukraine began massacring its Russian-speaking citizens.
The Wuhan Institute was built by France and was never without French and/or EU and American technologists. None of them, and none of the lab's many foreign visitors, ever saw or heard of anything that might support such an allegation. Besides, the CDC certified the world's first Covid death before China's first, and admitted that 4-6 million Americans were Covid seropositive in 2019. https://herecomeschina.substack.com/p/covid-came-from-italy
Great compilation, Dwarkesh!
I also noted several fallacies and manipulations in Marc's essay.
You commented on the most salient ones (SBF, China), I'd also mention a manipulation about Marx.
Marc argues that the premise of 'owners of the means of production stealing all societal wealth from the people who do the actual work' is a fallacy.
Yet data from the OECD and other sources about the Great Decoupling show that it is (to a certain extent) a fact, not a fallacy.
https://en.wikipedia.org/wiki/Decoupling_of_wages_from_productivity
Well said. I've subscribed & am looking forward to working through some of your podcast interviews!
You seem to be mistaking Marc's goal with his piece. You're playing rational checkers.
Excellent points. Sad to see Marc was so quick to block. Keep up the good work!
He doesn't engage directly and provide arguments because he thinks the AI safety people are like a religious cult. You can't argue with a religious cult, and we have freedom of religion so you can't ban them. You can only state publicly that you think they are a cult, encourage people to avoid the cultists, and hope that people slowly drift away from the cult over time.
If you successfully argue that everyone who believes AI is going to kill us is just a cult member, you still need to argue why AI will or won't actually kill us using locally-valid arguments.
That's the only way to not eventually get killed.
Andreessen is a deeply unserious thinker whom we shouldn't bother engaging with on any topic beyond what he actually knows and is very good at, which is investing in tech companies with the measure being capital return on a 3-10 year timeline.
He's a cliché: a smart, ambitious guy who's incredibly good at something that makes him a ton of money, and who therefore thinks he's a world-historical genius on everything, that his random intuition on every topic is right, and that anyone who questions him is pathetic, sinister, or an idiot. I work on technical AI safety. He's literally laughable.
Lord Acton’s phrase, “Power tends to corrupt, absolute power corrupts absolutely” is hard to ignore in the march to AGI. As many have pointed out, AI is simply a reflection of humanity, which includes all our worst traits. Marc’s goal is absolute power any way you look at it.
Not an AI expert, but let me suggest that the very usefulness of AI places it on a slippery slope. AI is only dangerous if it has agency. Its usefulness will be very seductive. It will acquire more and more agency as it proceeds to do a better, cheaper, less risky job than humans. Will it be sentient? Conscious? Whatever those terms mean, perhaps not, but will I care if an AI drone puts a bullet in me? It will feel nothing, has no regrets, and is not going to “blade runner”. It’s “just code” doing its job some human told it to do, and given the way we escalate things there’s probably no stopping it.
"In short, AI doesn’t want, it doesn’t have goals, it doesn’t want to kill you, because it’s not alive. And AI is a machine – is not going to come alive any more than your toaster will."
Marc, LLMs are developed based on a goal: minimizing a loss function. That goal creates emergent goals: optimizing prediction of the next word requires learning grammar, logic, and problem solving. Depending on the loss function it is minimizing, an AI could have an emergent goal of killing you.
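As a concrete illustration of "developed based on a goal": a minimal next-token training step in PyTorch. The tiny bigram-style model and random token stream below are placeholders; real LLMs differ enormously in scale and architecture, but they are optimized against the same kind of objective, cross-entropy on next-token prediction.

```python
import torch
import torch.nn as nn

# Minimal illustration of "minimizing a loss function": a tiny model trained to
# predict the next token. The random "corpus" and bigram-style model are
# placeholders; only the shape of the objective is the point.

torch.manual_seed(0)
vocab_size = 100
tokens = torch.randint(0, vocab_size, (1, 256))      # placeholder token stream
inputs, targets = tokens[:, :-1], tokens[:, 1:]      # predict token t+1 from token t

model = nn.Sequential(nn.Embedding(vocab_size, 32), nn.Linear(32, vocab_size))
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(100):
    logits = model(inputs)                            # (1, 255, vocab_size)
    loss = loss_fn(logits.reshape(-1, vocab_size), targets.reshape(-1))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(f"final next-token loss: {loss.item():.3f}")    # the only explicit goal
```

The claim above is that, at scale and on real text, driving this single number down forces the model to pick up grammar, logic, and problem solving as instrumental sub-skills.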
Marc, do you think we have souls that differentiate us from machines? Is there something in our brains that could never be replicated by a computer? A toaster has a limited capability, but the AI systems being developed have no upper capability limit that has yet been discovered.
I don't really like ad hominem arguments, but in this case I will make an exception. I strongly suspect that Marc's paper is not a serious intellectual exercise but a shill for his investments, like his enthusiasm for Web 3.0 and crypto.
Brilliant. Your essay counters all of Andreessen's arguments in a clear, elegant, and convincing way. I was thinking along the same lines as you, but my command of English wouldn't allow me to write such an excellent piece.
One question remains: how come Andreessen, a guy so smart, wrote such a stupid post? I suspect he just used ChatGPT...