Listen now | Paul Christiano is the world’s leading AI safety researcher. My full episode with him is out! We discuss: Does he regret inventing RLHF, and is alignment necessarily dual-use? Why does he have relatively modest timelines (40% by 2040, 15% by 2030)? What do we want the post-AGI world to look like (do we want to keep gods enslaved forever)?
Probably no comments on this because it’s so vapid. The level of speculation is also pointless, as we have zero belief that anything said here will be relevant in even a year.
Fascinating podcast episode, especially for someone like me who is not in favor of Artificial Intelligence systems being promoted everywhere. I was proud to be called a Luddite on another website!
Incredible guest. Excited to listen to this one.