If more people adopted an EA mindset within the causes they already care about, that would probably be a good thing. But my goal isn’t to get more people to adopt EA ideas; my goal is to do whatever’s most effective. Is focusing on movement building targeted at cause-specific people the most effective thing to do?
You don’t really argue for this claim and I don’t know if you believe it; but I’m pretty sure it’s false. Consider, for example, that the most effective global poverty charities are about 100 to 1000 times more effective than the most effective charities in other popular sectors of giving. The difference probably matters even more in some causes—I would posit that SCI probably does 10,000 to a million times more good than the best arts charity. That means if you can convince one person to give to SCI, that’s as good as convincing 10,000 arts enthusiasts to make donations more effectively within the arts. One of these sounds a lot easier than the other.
This post seems to argue that we should prefer the world in which cause-committed people still care about effectiveness to the one in which they don’t. It’s not that you’re wrong; I agree with most of what you say. But I don’t think you’re looking at this in the right way. The question is not “what world is best?” but “what can I do that has the greatest impact?” I don’t get to choose what world we live in; I just get to choose what I do. And I don’t think I should spend my time trying to convince people in suboptimal causes to donate to better organizations within those causes.
Maybe you’re not convinced by what I’ve written so far because you think cause neutrality is so central to EA identity that no true EA leader would ever countenance any distraction from it.
I’m quoting this sentence because I believe it’s a great example of how you’re not framing this correctly. EA is not about holding onto an identity and rejecting anything that challenges it. EA is about doing the most good. It’s not about having a centrally defined mission that you must adhere to. It’s about doing the most good. EA is a question, not an ideology.
Consider, for example, that the most effective global poverty charities are about 100 to 1000 times more effective than the most effective charities in other popular sectors of giving.
Citation please. I believe this claim is false. E.g. one of the most popular foundation grant areas is life sciences research. This GiveWell post ballparks generic cancer research (a very heavily funded field that has gotten relatively weak results) as being less than 100x, and suggests that particularly effective biomedical research could be orders of magnitude more effective than that.
I would posit that SCI probably does 10,000 to a million times more good than the best arts charity.
I say that this is quite unlikely. GiveWell estimates the benefits of SCI as about 5-10x the direct benefits of cash transfers. Cash transfers might give a 30x multiplier, but not 100x, I think.
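To make the arithmetic explicit, here is a minimal sketch using the ballpark figures above. The baseline that the 30x cash-transfer multiplier is measured against (something like a typical rich-country charity) is an assumption of the sketch, not something stated in the comment:

```python
# Rough arithmetic behind the estimate above. Inputs are the ballpark figures
# stated in the comment, not precise GiveWell numbers; the baseline for the
# 30x cash multiplier (e.g. a typical rich-country charity) is assumed.
sci_vs_cash = (5, 10)     # SCI ~5-10x the direct benefit of cash transfers
cash_vs_baseline = 30     # assumed ~30x multiplier for cash over the baseline

sci_vs_baseline = tuple(s * cash_vs_baseline for s in sci_vs_cash)
print(sci_vs_baseline)    # (150, 300): hundreds of times, not 10,000x to 1,000,000x
```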
And I am confident that the best arts projects generate more benefits for the beneficiaries than simply handing over the cash would. Consider something like loosening the Mickey Mouse copyright stranglehold and releasing old works into the public domain. The consumer surplus from such increased access to and creation of art could be quite large relative to the investment.
There is also the intersection of art and other important causes and issues. The arts (especially mass media like film-making) have played important roles in drawing attention to a variety of problems, and such art can be specifically supported.
From a cause-neutral point of view these are unlikely to be the very top priorities but the numbers you give above are not plausible to me.
People frequently behave as though EA is an ideology. I believe they ought not behave this way; we will do more good if we focus on doing good and not on the dogmas that inevitably arise out of the EA community. I myself am guilty of this: when responding to OP, I originally wanted to “defend my tribe” and say OP is bad and EA is good. I re-wrote my comment several times to focus on the fact that OP is actually good and saying valuable things, and to formulate a constructive and cooperative comment, instead of just defending my tribe. I believe this is a good thing and I ought to try to do this more often, and I can probably do better.
(1) People often behave as though EA is an ideology, not a question; this is generally harmful.
I can probably mitigate the effects of (1) by reminding people to think of EA as a question, not an ideology; which is why I did so in my previous comment.
If we have more modest numbers we might get to a similar conclusion though.
e.g. suppose the distribution of cost-benefit ratios looks like this:
Typical US charity: 1
Good US charity: 10
Typical international charity: 20
GiveDirectly: 30
Biomedical research, US policy advocacy: 100
Best international charities (i.e. AMF): 300
Good meta-charity, xrisk, advocacy etc: 1500+
Then, moving someone from a typical US focused charity to a good one produces an extra 9 units of impact per dollar; whereas moving them to the best international charity produces 299.
So, to generate the same impact, you need to persuade about 33 times as many people (299/9) to switch to the best thing within the US as you’d need to persuade to switch to the best international development charity.
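A minimal sketch of this break-even arithmetic, using the illustrative cost-benefit ratios listed above (assumed numbers for the sake of argument, not measured figures):

```python
# Illustrative cost-benefit ratios from the list above (assumptions, not data).
typical_us = 1
good_us = 10
best_international = 300   # e.g. AMF in the list above

# Impact gained per dollar by moving a donor from a typical US charity...
gain_within_cause = good_us - typical_us              # 9: ...to a good US charity
gain_cause_switch = best_international - typical_us   # 299: ...to the best international charity

# Number of within-cause switches needed to match one cause switch.
breakeven = gain_cause_switch / gain_within_cause
print(round(breakeven, 1))   # 33.2
```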
At the current margin, it seems substantially easier to me to persuade one person to change cause towards international health and support the best charity in the area, than to persuade 33 US focused donors to choose the best thing in their area.
So, it seems like finding people willing to switch cause should be the community’s key priority until we’re at least 1-2 orders of magnitude bigger.
There are some other important strategic priorities that tell in favor of focus, such as (i) community building—it’s easier and safer to form the community around a well-coordinated, dedicated core of people rather than a wider base that is more vulnerable to dilution and fracturing; and (ii) delayability—it seems possible to add in cause-specific efforts in the future.
However, I think there are good arguments for putting a small amount of resources into a wider range of causes—such as (i) information value (i.e. learning about a wider range of areas); (ii) building expertise (we want to learn about lots of areas and skills since we’ll need this knowledge in the future); (iii) improving the brand (having some material for a wide range of areas makes it clear that we really do care about all ways of doing good); and (iv) creating stepping stones (you might be able to get some people involved with cause-specific content who will switch cause later on). Fortunately, this is already happening to some degree.
(First, really stupid question—not sure I understand the math here? Why wouldn’t switching from typical US to good US produce 9 extra units of impact per your assumption, not 4?)
Anyway, regarding this:
At the current margin, it seems substantially easier to me to persuade one person to change cause towards international health and support the best charity in the area, than to persuade 49 US focused donors to choose the best thing in their area.
I think one thing you’re not taking into account is that not all EA community members are interchangeable; different people have different leverage within their communities. It would be trivial for me to motivate 49 arts enthusiasts to switch donations to a better US charity in the arts, given that I run a publication with roughly 10k total followers and several hundred “true fans.” There are analogous people in other domains across the spectrum. So one approach to outreach could largely involve finding and forming partnerships with aligned, influential individuals in those domains.
Sorry, I changed some of the numbers midway while writing, then forgot to change the others to be in line. They’re updated now.
I think one thing you’re not taking into account is that not all EA community members are interchangeable; different people have different leverage within their communities. It would be trivial for me to motivate 49 arts enthusiasts to switch donations to a better US charity in the arts, given that I run a publication with roughly 10k total followers and several hundred “true fans.” There are analogous people in other domains across the spectrum. So one approach to outreach could largely involve finding and forming partnerships with aligned, influential individuals in those domains.
I agree—I was focusing on what the core focus of the community should be, but if people have a comparative advantage in an area that gives them 10-100x more leverage, it could outweigh. I can also see we might have underweighted the size of these differences in the past.
That’s interesting, Carl, I wouldn’t have necessarily thought of copyright reform as one of the highest-impact arts interventions, but the consumer surplus angle is intriguing. This exchange is actually a great example of how domains can benefit from participation in this community.
I also want to throw another thought out there: it’s not inconceivable to me that we might find the most effective way to support the arts in the world is to, say, give cash transfers to poor people in Africa. Or put resources towards some other broad, systemic issue that affects everyone but is disproportionately relevant in the domain of the arts. If people in the effective altruist community said that, everyone would freak out and think you’re just throwing stuff at a wall to get people to switch donations away from the arts. But if an entity with authentic roots in the arts said that, the reaction would be quite different. See, for example, this: http://creativz.us/2016/02/02/what-artists-actually-need-is-an-economy-that-works-for-everyone/ Furthermore, Createquity would only come to that conclusion after researching the other major interventions and causes within the arts that people already care about, so we would have a much more concrete comparative case to make.
As always, everything I’m saying here potentially applies in other cause areas as well. I know we’re talking about the arts a lot in this thread because that’s my background and what I know best, but I don’t think any of this is less true for, e.g., higher education or local social services.
I also want to throw another thought out there: it’s not inconceivable to me that we might find the most effective way to support the arts in the world is to, say, give cash transfers to poor people in Africa. Or put resources towards some other broad, systemic issue that affects everyone but is disproportionately relevant in the domain of the arts.
For a related take on advancing science, see this.
If we mean ‘the arts’ in general and over time, I think this is extremely likely. Basically that would mean working to reduce existential risk, in my view. The long-run artistic achievements of civilization (provided that it survives and retains any non-negligible interest in art) will be many orders of magnitude more numerous, and much higher in peak quality, than those we have seen so far.
I was taking you to mean ‘the arts’ as something like a constraint on the degree of indirectness, patience, etc, that one accepts. E.g. ‘the arts for my community now, not for foreigners or future times, via methods that are connected reasonably closely to the arts world.’
Regarding loosening copyright, it’s not just letting people enjoy the old works, but enabling new creation.
Michael, a large part of my argument rests on the premise that the EA community has grown to the point where it is capable of walking and chewing gum at the same time. You seem to be viewing this through an individual scarcity lens where we only have one choice to make about which action we’re going to take, and it has to be the most effective one. I disagree. I see EA as a diverse, multifaceted movement with many assets that can be deployed toward the collective good. This piece is about how those resources can be collectively deployed most effectively, which is a different question from “how can I do the most good.”
Imagine chewing gum is an unbelievably effective cause: its life-saving impact is many orders of magnitude higher than walking’s. If we want to maximise chewing gum to the fullest, we cannot have any distractions, not even potential or little ones. Walking has opportunity costs and prevents us from extremely super effective gum chewing.
This piece is about how those resources can be collectively deployed most effectively, which is a different question from “how can I do the most good.”
Michael’s post still applies. Collective resources are just the sum of many individuals’ resources, and everyone/every group contemplating their marginal impact ideally includes other EAs’ work in their considerations. The opportunity cost point applies both to individuals and to groups (or the entire movement).
Any unit of EA resources spent by x people has opportunity costs.
Can you walk me through your reasoning of why the marginal value of encouraging the practice of effective altruism within domains is not likely to be greater than the marginal opportunity cost of doing so?
Because we could work on more effective causes with these resources. See Michael’s comment:
The difference probably matters even more in some causes—I would posit that SCI probably does 10,000 to a million times more good than the best arts charity. That means if you can convince one person to give to SCI, that’s as good as convincing 10,000 arts enthusiasts to make donations more effectively within the arts. One of these sounds a lot easier than the other.
Spreading EA thinking within domains is an idea for an intervention in the EA outreach cause. I don’t think the good per unit of time invested (i.e. the impact) can compete with already existing EA interventions.
So, are you arguing that investing in EA outreach in domain-specific ways can’t compete or that investing in EA outreach at all can’t compete? Your last paragraph sounds like you’re saying the latter, but I find that to be a rather nonsensical position if you think that correctly targeted donations are so highly leveraged.
If the claim is that domain-specific EA outreach is less effective per unit invested than cause-neutral EA outreach, keep in mind that I argue domain-specific EA outreach will grow the movement faster/more than the alternative, which in turn creates more resources that can be deployed toward further outreach (or other helpful functions, like operations or research). Depending on your assumptions about the ratio between the total ceiling of cause-neutral people and domain-specific people out there, that growth factor could be extremely significant to EA’s total impact on the world.
The former; outreach is great. It would probably be better if you argued in the thread above to collect your thoughts in one place, since I share Ben Todd’s opinion and he put it much better than I could.
I enjoyed reading your well thought out post by the way!
Maybe I am missing something here, but—given your post and your arguments—how does it follow that the EA movement should not endorse cause-specific effective altruism?
If I understand the “EA mission” correctly, it is about doing the most good in total. The original poster seems to believe that EA endorsing cause-specific effective altruism will do more good (overall) than if they don’t. Hence, if you disagree, you should argue why it would be better for EA not to endorse this. Where am I making a mistake in this logic?
My own intuition (which I tried to hint at in my first post) is that any official endorsement of cause-specific effective altruism on behalf of EA would take away too much from the core of EA to be worth it. YES, the world would be better if everyone applied the EA core values to their own field, BUT resources are too tight, or it might be too distracting, to devote any attention to such “secondary” causes. (That being said, I am very much aware that my intuition might be wrong!)
What does it mean for the EA movement to endorse something? If that just means that I should say cause-specific effective altruism is a good thing, then okay, I hereby declare that cause-specific effective altruism is a good thing. But if you mean that I should spend my limited time campaigning to convince people in causes like the arts to focus on more effective interventions within their own cause, then I think it’s pretty clear that I should not do that.
Michael, I’ll clarify what actions from the EA community I am specifically making a case for. I am arguing two things:
1) people who are already invested in EA outreach ought to consider strategies that reach and activate people invested in specific domains; and
2) people who are invested in EA in general, but not in EA outreach specifically, ought to recognize the value of 1).
Now, those “ought tos” are of course contingent upon your agreement with the specific arguments and assumptions that I lay out in the piece. But I am not trying to convince you, specifically, to campaign for domain-specific EA except to the extent that you’re campaigning for EA already and not 100% successful in those efforts.
Your figures sound too high. Remember we are comparing the best arts charity, not the average arts charity, to SCI. To make such a comparison, we’d have to write down believable impacts from the arts and from SCI for a particular amount of money invested; only then could we actually compare them.
I’m not saying that I’m in favour of Effective Altruism engaging in domain-specific effective altruism, just that we would need a more nuanced comparison to evaluate this claim. Anyway, even if we were to loosen up EA, it seems that allowing arts to count as EA would be going too far, to the point of damaging our credibility.