[...] I noticed that most of my belief in AI risk was caused by biased thinking: self-aggrandizing motivated reasoning, misleading language, and anchoring on unjustified probability estimates.
Thank you so much for your reflection and honesty on this. Although I think concerns about the safe development of AI are very legitimate, I have long worried that the speculative, sci-fi nature of AI x-risk gives cover to a lot of bias. More cynically, I think engaging with AI risk and thinking about it from a longtermist perspective is a great way to show off how smart and abstract you are while (theoretically) also having the greatest possible moral impact.
I just think identifying with x-risk and hyperastronomical estimates of utility/disutility meets a suspicious number of emotional and intellectual needs. If we could see the impact of our actions to mitigate AI risk today, motivated reasoning might not be such a problem. But longtermist issues are precisely those where we can't afford self-serving biases, because the error won't necessarily show. I'm really glad to see someone speaking up about this, particularly from their own experience.
If people are biased towards believing their actions have cosmic significance, does this also imply that people without math & CS skills will be biased against AI safety as a cause area?
Not necessarily, because people can believe that multiple kinds of work are significant. I will never be in the military, but I believe there are generals out there whose decisions are life-and-death for a lot of people. I could presumably believe the same about AI safety.