I think we should be narrower about what concrete changes we discuss. You’ve mentioned “integration,” “embracing and working around”… what does that really mean? Are you suggesting that we spend less money on the effective causes and more money on the mainstream causes? That would be less effective (obviously), and I don’t see how it’s supported by your arguments here.
If you are referring to career choice (we might go into careers to shift funding around), I don’t know that the large amount of funding for ineffective causes really changes the issue. If I can choose between managing $1M of global health spending or $1M of domestic health spending, there’s no debate to be had.
If you just mean that EAs should provide helpful statements and guidance to other efforts… this can be valuable, and we do it sometimes. First, we can provide explicit guidance, which gives people better answers but faces the problems of (i) learning about a whole new set of issues and (ii) navigating reputational risks. Examples of this include the Founders Pledge report on climate change and the Candidate Scoring System. As both cases show, it takes substantial effort to make respectable progress here.
However, we can also think about empowering other people to apply an EA toolkit within their own lanes. The Future Perfect media column in Vox is largely an example of this, as it looks at American politics from a mildly more EA point of view than is typical. I can also imagine articles along the lines of “how EA inspired me to think about X,” where X is an ineffective cause area. I’m a big fan of spreading the latter kind of message.
Note: I think your argument is easy enough to communicate by merely pointing out the different quantities of funding in different sectors, and trying to model and graph everything in the beginning is unnecessary complexity.
Thanks for your comments, kbog!
The idea behind the post was not to advocate for spending more money on ineffective causes, at least not in the form of donations.
(Let’s go with global dev as an example problem area.) I think providing guidance begins to paint the picture of what I’m advocating for. But something like the Vox newsletter isn’t an adequate way to study the effectiveness of global dev. The real issue at hand is what the upside of formal organization around analyzing dev effectiveness could be, e.g. a Center for Election Science for development, or Open Phil announcing a dev Focus Area.
First and foremost, I think there is high upside to simply studying the current impact of the dev sector. This was the idea behind bringing up the orders-of-magnitude difference between EA and dev earmarked capital. It’s not about deciding where a new donation goes. Nor is it accurate to frame it as deciding between managing ‘$1M in domestic versus global health’. The reality is that there are trillions of dollars locked within dev programs that often have tenuous connections to impact. Making these programs just 1% more efficient could have massive impact potential relative to the small amount of preexisting capital EA has at play.
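To make the leverage argument concrete, here is a minimal back-of-envelope sketch. The dollar figures are purely illustrative assumptions (not sourced estimates of actual dev-sector or EA capital); the point is only the ratio that falls out of the arithmetic.

```python
# Back-of-envelope comparison with illustrative, hypothetical figures:
# suppose the mainstream dev sector controls ~$1 trillion in earmarked
# capital, while EA-aligned capital is on the order of $10 billion.
dev_capital = 1_000_000_000_000   # hypothetical dev-sector capital, in dollars
ea_capital = 10_000_000_000       # hypothetical EA-aligned capital, in dollars

efficiency_gain = 0.01            # a 1% improvement in dev-sector efficiency
gain = dev_capital * efficiency_gain

# Compare the value of that 1% gain to the entire EA capital pool.
ratio = gain / ea_capital
print(f"Value of a 1% efficiency gain: ${gain:,.0f} "
      f"(= {ratio:.0%} of the hypothetical EA pool)")
```

Under these assumed numbers, a 1% efficiency gain on the dev sector equals the entire EA capital pool, which is the leverage intuition behind studying dev-sector impact directly.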
The broader point behind addressing these larger capital chunks, and working directly on improving the efficiency of mainstream problem areas, is that the Overton window model of altruism suggests that people will always donate to ‘inefficient’ charities. Instead of turning away from this and forming its own bubble, EA might stand to gain a lot by addressing mainstream behaviors more directly. Shifting the curve to the right, instead of building up from scratch, might be easier.