Feb 6, 2023 · edited Feb 7, 2023 · Liked by Dwarkesh Patel
One interesting thing to me was the part on extended time frames and contact with reality. From the transcript:
> "...new things don't tend to go through a 20 year incubation phase in business and then come out the other end and be good. What seems to happen is they need milestones, they need points of contact with reality.
> "Every once in a while there will be a company... with a founder who's like — look, 'I'm gonna do the long term thing', and then they go into a tunnel for 10 or 15 years where they're building something and the theory is they're going to come out the other side. These have existed and these do get funded.
> "Generally they never come up with anything. They end up in their own Private Idaho, they end up in their own internal worlds, they don't have contact with reality, they're not ever in the market, they're not working with customers. They just start to become bubbles of their own reality. Contact with the real world is difficult every single time."
To me, that seems like a big problem for something like Holden Karnofsky's Most Important Century, as well as a lot of the other AI alignment work that places like MIRI are doing.
The plan seems to be, "we're just going to go off over here and think really hard about AI for a while". I think it's a tricky problem -- it's way easier to get contact with reality in business when spinning up a new product -- so I don't necessarily blame them, but I'd guess it's why the AI alignment industry has (admittedly?) produced so few results so far despite the sheer amount of talent + brainpower involved. It's also why I'd be pessimistic re: results going forward.
Note that I think it's also a reason to be more optimistic about the doomsday AI scenario more generally. The fact that the doomsday crowd hasn't had much contact with reality so far makes a lot of their worries seem very hypothetical. They might already be in their own Private Idaho.
I wish you asked Marc A about his NIMBYism and how he reconciles it with the rest of his proclaimed worldview:
"Subject line: IMMENSELY AGAINST multifamily development!
I am writing this letter to communicate our IMMENSE objection to the creation of multifamily overlay zones in Atherton … Please IMMEDIATELY REMOVE all multifamily overlay zoning projects from the Housing Element which will be submitted to the state in July. They will MASSIVELY decrease our home values, the quality of life of ourselves and our neighbors and IMMENSELY increase the noise pollution and traffic."
I believe that the smartest entry point for education reinvention is this: https://medium.com/@mnemko/the-teacher-of-2050-2fd43f862750