I find this interesting to answer myself as well, although I’m curious to see Will’s answer.
I think casual EAs generally have less nuanced views than those who think about these issues full-time (obviously). For example, the community’s certainty about the relative importance of AI compared to other x-risks is probably overstated. In general, I find ‘casual EAs’ have an overly simplistic view of how the world works, while deeper engagement with these topics brings the complexity of the issues to the surface. In a complex world, precise, quantitative models are more likely to be wrong, and it’s worth pursuing a broader set of actions. I have seen multiple smart, motivated ‘casual EAs’ basically give up on EA because “they couldn’t see themselves being an AI safety researcher”. (I’d love to see a list like “20 things to do for the long-term future without being an AI safety researcher”.)
I think simplification is definitely useful for getting a basic grasp of issues and making headway. In fact, this “ignorance of complexity” may actually be a big strength of EA: people don’t get overwhelmed and demotivated by the daunting amount of complexity, and actually try to tackle issues that most of the world ignores because they’re too big. However, EAs should expect things to become more complex, more nuanced, and less clear as they learn more about a topic.