A key implication here is that we need models of how AI will transform the world with many qualitative and quantitative details. Individual EAs working in global health, for example, cannot be expected to broadly predict how the world will change.
My view, having thought about this a fair bit, is that the range of possible outcomes is extremely broad, spanning human extinction, various dystopias, and utopia or “utopia”. But there are probably many effects that are relatively predictable, especially in the near term.
Of course, EAs in field X can think about how AI affects X. But this should work better after they first learn about whatever broad changes superforecasters (or whoever) can predict.