Oh, I don’t think either conclusion is clearly right. I do worry that me being happy makes it too easy for me to neglect important worries about what things are like for others.
But I think I was sloppy in rounding to “maybe AI ending everything wouldn’t be that bad,” partly because the world could well get better than it currently is, and partly because unaligned AI could make things worse.
That makes sense, thank you!