I think I mainly agree with the other comments (from Devin Kalish and Miranda Zhang), but on net I’m still glad that this post exists. Many thanks for writing it :)
Specifically, I think positive/x-hope (as opposed to negative/x-risk) styled framings can be valuable. Because:
There’s some literature in behavioral psychology and in risk studies which says, roughly, that a significant fraction of people will kind of shut down and ignore the message when told about a threat. See, e.g., normalcy bias and ostrich effect.
Mental contrasting has been shown to work (and the study replicates, as far as I can tell). Essentially, the research behind mental contrasting tells us that people tend to be strongly motivated by working toward a desired future.[1]
Anecdotally, I’ve found that some people do just resonate strongly with “inspirational visions” and the idea of helping build something really cool.
Technically, mental contrasting only works if the perceived chance of success is high. I’m not sure what percentage “high” maps to in this context, but there is an issue here for folks who have a non-low P(doom) via misaligned AI… To counter this, there is the option of resorting to the Dark Arts, self-deception in particular, but that discussion is beyond the scope of this comment. See the Dark Arts tag on LessWrong if interested.