I was just reflecting on the term ‘global priorities’. To me it sounds like it’s asking “what should the world do”, in contrast to “what should I do”. The former is far mode, the latter is near. I think that staying in near mode while thinking about improving the world is pretty tough. When people fail, they end up making recommendations that could only work in principle if everyone coordinated at the same time, they shape their speech around signaling to achieve those ends, and they often walk off a cliff of abstraction. When people stay in near mode, they focus not on opportunities that require coordination, but on opportunities they can personally achieve. I think that EAs caring very much about whether they actually helped someone with their donation has been one of the healthier epistemic things for the community, though I do not mean to argue it should be held as a sacred value.
For example, I think the question “what should the global priority be for helping developing countries” is naturally answered by talking broadly about the West helping Africa build a thriving economy, about political revolutions to remove corruption from governments, and about what sorts of multi-billion-dollar efforts could take place, like what the Gates Foundation should do. This is a valuable conversation that has been going on for decades, if not centuries.
I think the question “what can I personally do to help people in Africa” is more naturally answered by providing cost-effectiveness estimates for marginal thousands of dollars given to charities like the Against Malaria Foundation (AMF). This is a valuable conversation that I think has had orders of magnitude less effort put into it outside the EA community. It’s a standard idea in economics that you can reliably get incredibly high returns on small marginal investments, and it is these kinds of investments that the EA community has been much more successful at finding and has managed to exploit to great effect.
“global priorities (GP)” community is… more appropriate than “effective altruism (EA)” community… More appropriate (or descriptive) because it better focuses on large-scale change, rather than individual action
Anyway, I was surprised to read you say that, since it is in direct contrast to what I was thinking, and, I think, to how I have often thought of Effective Altruism.