Five types of people when it comes to AI risk:
1. Wants AGI as soon as possible and ignores safety.
2. Wants AGI, but primarily cares about alignment.
3. Doesn’t understand AGI, or doesn’t think it will happen in her lifetime; worries instead about robots taking people’s jobs.
4. Understands AGI, but thinks the timelines are long enough not to worry about it right now.
5. Doesn’t worry about AGI; thinks getting locked into our choices and “normal accidents” are both more important/risky/scary.
Here’s my attempt to reflect on the topic: https://forum.effectivealtruism.org/posts/PWKWEFJMpHzFC6Qvu/alignment-is-hard-communicating-that-is-harder