I personally believe that many, if not most, of the world’s most pressing problems are political problems, at least in part.
I agree! But if this is true, doesn’t it seem very problematic if a movement that means to do the most good does not have tools for assessing political problems? I think you may be right that we are not great at that at the moment, but it seems… unambitious to just accept that?
I also think that many people in EA do work on political questions, and my guess would be that some do it very well—but that most of those do it in a full-time capacity that is something different from “citizen politics”. Could it be that, rather than EA being poorly suited to assessing political issues, EA does not (yet) have great tools for assessing part-time activism, which would be a much narrower claim?
Great discussion! I think perhaps there is some subtle conflict between EA’s goal of a “radically better world” and marginal cost-effectiveness. On marginal cost-effectiveness, I think EA does a good job, and the ITN framework is helpful. However, if we want, as CEA states, to contribute to solving “...a range of pressing global problems — like global poverty, factory farming, and existential risk”, I think we need to get much more politically involved. I actually think this has happened in EA already: I have sensed a big shift with the focus on AI, where the focus on politics has become almost dominant. In short: I do not think you can get to a radically better world by only chipping away at the margin. That is not how I understand many important changes of the past came about, whether democracies, women’s voting rights, civil rights, etc. I do see radical changes having come about incrementally in e.g. medical science, but if we removed every improvement in history that came about through less incremental change, I think we would live in a significantly worse world.