This is a great study of a very important question.
Look at politics today and you see that fear is a very powerful motivator. I would say that most voters in the US presidential election seem to be motivated primarily by fear of what would ensue if the other side wins. Each echo-chamber has built up a mental image of a dystopian future which makes them almost zealots, unwilling to listen to reason, unable even to think calmly.
A big challenge that we face however—with climate change and with AI-risk—is that the risks we speak of are quite abstract. Fears that really work on our emotions tend to be very tangible things—snakes, spiders, knives.
Intellectually I know that the risk of a superintelligent AI taking over the world is a lot scarier than a snake, but I know which one would immediately drive me to act to eliminate the risk.
IMHO there is a massive opportunity for someone to research the best way to provoke a more emotional, tangible fear of AI risk (and of climate risk). This project you’ve done shows that this is important, and that endless optimism is not the best way forward. We don’t want people paralysed by fear, but right now we have the opposite: everyone moving forward blindly, just assuming that the risk will go away.