I think it’s healthy to be happy about disagreeing with other EAs about something. Either it means you can outperform them, or it means you’re misunderstanding something. But if you believed the same thing, you certainly wouldn’t be outperforming them. : )
I think the future depends to a large extent on what the people in control of extremely powerful AI end up doing with it, conditional on humanity surviving the transition to that era. We should probably speculate on what we would want those people to do, and try to prepare authoritative and legible documents that such people will be motivated to read.