In addition, some people might perceive the “guide dogs vs. trachoma surgeries” example as ableist, or might think that EAs are suggesting that governments spend less on disabled people and more on foreign aid. (This is a particularly significant issue in Germany, where there have been many protests by disability rights advocates against Singer, including, more recently, when he gave talks about EA.)
In fact, one of the top Google hits for “guide dog vs. trachoma surgery” is this:
The philosopher says funding should go toward prevention instead of guide-dog training. Activists for the blind, of course, disagree.
For these reasons, I suggest not using the guide dog example at all anymore.
The above article also makes the following interesting point:
Many people are able to function in society at a much higher level than ever before because of service dogs and therapy dogs. You would think that’s a level of utility that would appeal to Singer, but he seems to have a blind spot of his own in that respect.
This suggests that both guide dogs and trachoma surgeries cause significant flow-through effects. All of these points combined might decrease the effectiveness difference from 1000x to something around 5x-50x (see also Why Charities Don’t Differ Astronomically in Cost-Effectiveness).
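One way to see why flow-through effects compress such ratios: if two charities produce very different direct benefits but broadly similar indirect benefits, the ratio of their total impact is far smaller than the ratio of their direct impact. A minimal sketch in Python, with purely hypothetical numbers chosen only to illustrate the 5x-50x claim:

```python
# Toy illustration: shared flow-through effects compress headline ratios.
# All numbers are purely hypothetical.
direct_a, direct_b = 1000, 1   # direct impact differs 1000x
flow_through = 30              # similar indirect impact assumed for both

ratio = (direct_a + flow_through) / (direct_b + flow_through)
print(round(ratio))  # ~33: far below 1000x, inside the suggested 5x-50x band
```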
I don’t understand the objection about it being “ableist” to say funding should go towards preventing people becoming blind rather than training guide dogs.
If “ableism” is really supposed to be like racism or sexism, then we should not regard it as better to be able to see than to have the disability of not being able to see. But if people who cannot see are no worse off than people who can see, why should we even provide guide dogs for them? On the other hand, if—more sensibly—disability activists think that people who are unable to see are at a disadvantage and need our help, wouldn’t they agree that it is better to prevent many people—say, 400—experiencing this disadvantage than to help one person cope a little better with the disadvantage? Especially if the 400 are living in a developing country and have far less social support than the one person who lives in a developed country?
Can someone explain to me what is wrong with this argument? If not, I plan to keep using the example.
Here’s what I’ve usually found most unfortunate about the comparison, though I don’t mean to compete with anyone who thinks that the math is the more unfortunate part.
The decision to sacrifice the well-being of one person for that of others (even many others) should be hard. If we want to be trusted (and the whole point of GiveWell is that people don’t have the time to double-check all research no matter how accessible it is – plus, even just following a link to GiveWell after watching a TED Talk requires that someone trusts us with their time), we need to signal clearly that we don’t make such decisions lightly. It is honest signaling too, since the whole point of EA is to put a whole lot more effort into the decision than usual. Many people I talk to are so “conscientious” about such decisions that they shy away from them completely (implicitly making very bad decisions). It’s probably impossible to show just how much effort and diligence has gone into such a difficult decision in just a short talk, so I’d rather focus on cases where I am, or each listener is, the one at whose detriment we make the prioritization decision, just like in the Child in the Pond case. Few people would no-platform me because they think it’s evil of me to ruin my own suit.
Sacrificing oneself, or rather some trivial luxury of one’s own, also avoids the common objection of why a discriminated-against minority should have to pay when there are [insert all the commonly cited bad things like tax cuts for the most wealthy, military spending, an inefficient health system, etc.]. It streamlines the communication a lot.
The group at whose detriment we need to decide should never be a known, discriminated-against minority in such examples, because these people are used to being discriminated against and their allies are used to seeing them discriminated against, so when someone seems to be saying that they shouldn’t receive some form of assistance, they have a huge prior for assuming that it’s just another discriminatory attack. I think their heuristic more or less fails in this case, but that is not to say that it’s not a very valid heuristic. I’ve been abroad in a country where pedestrian crosswalks are generally ignored by car drivers. I’m not going to just blindly walk onto the street there even if the driver of the only car coming toward me is actually one who would’ve stopped for me if I did. My heuristic fails in that case, but it generally keeps me safe.
Discriminated-against minority groups are super few, especially ones the audience will be aware of. Some people may be able to come up with a dozen or so, some with several dozen. But in my actual prioritization decisions for the Your Siblings charity, I had to decide between groups with such fuzzy reference classes that there must be arbitrarily many such groups. Street children vs. people at risk of malaria vs. farmed animals? Or street children in Kampala vs. people at risk of malaria in the southern DRC vs. chickens farmed for eggs in Spain? Or street children of the lost generation in the suburbs of Kampala who were abducted for child sacrifice but freed by the police and delivered to the orphanage we’re cooperating with vs. …. You get the idea. If we’re unbiased, then what are the odds that we’ll draw a discriminated-against group from the countless potential examples in this urn? Drawing one should heavily update a listener toward thinking that there’s some bias against the minority group at work. Surely, the real explanation is something about salience in our minds or ease of communication and not about discrimination, but they’d have to know us very well to have so much trust in our intentions.
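To spell out the urn argument, here is a toy Bayesian sketch in Python; every number in it is made up purely for illustration:

```python
# Toy Bayes update for the "urn" argument; all numbers are made up.
p_bias = 0.10                  # prior: speaker is biased against the group
p_draw_if_unbiased = 0.005     # ~5 salient discriminated-against groups
                               # among ~1000 equally usable examples
p_draw_if_biased = 0.50        # a biased speaker likely picks such a group

posterior = (p_draw_if_biased * p_bias) / (
    p_draw_if_biased * p_bias + p_draw_if_unbiased * (1 - p_bias)
)
print(round(posterior, 2))  # 0.92: one example yields a big update toward "bias"
```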
People with disabilities probably have distance “bias” at the same rates as anyone else, so they’ll perceive the blind person with the guide dog as in-group, the blind people suffering from trachoma in developing countries as a completely neutral foreign group, and us as attacking them, making us the out-group. Such controversy is completely avoidable and highly dangerous, as Owen Cotton-Barratt describes in more detail in his paper on movement growth. Controversy breeds an opposition (and one that is not willing to engage in moral trade with us) that destroys option value, particularly by depriving us of the highly promising option to draw on the democratic process to push for the most uncontroversial implications of effective altruism that we can find. Scott Alexander has written about this under the title “The Toxoplasma of Rage.” I don’t think publicity is worth sacrificing the political power of EA, but that is just a great simplification of Owen Cotton-Barratt’s differentiated points on the topic.
Communication is by necessity cooperative. If we say something, however true it may be, and important members of the audience understand it as something false or something else entirely (that may not have propositional nature), then we failed to communicate. When this happens, we can’t just stamp our collective foot on the ground and be like, “But it’s true! Look at the numbers!” or “It’s your fault you didn’t understand me because you don’t know where I’m coming from!” That’s not the point of communication. We need to adapt our messaging or make sure that people at least don’t misunderstand us in dangerous ways.
(I feel like you may disagree on some of these points for reasons similar to why The Point of View of the Universe seemed to me to argue for a non-naturalist type of moral realism while I “only” try to assume some form of non-cognitivist moral antirealism, maybe emotivism, which seems more parsimonious to me. Maybe you feel, or have good reasons to think, that there is a true language (albeit in a non-naturalist sense), so that it makes sense to say “Yes, you misunderstood me, but what I said is true, because …,” while I’m unsure. I might say, “Yes, you misunderstood me, but what I meant was something you’d probably agree with. Let me try again.”)
Blind people are not a discriminated-against group, at least not in the first world. The extreme poor, on the other hand, often face severe discrimination—they are mistreated and have their rights violated by those with power, especially if they are low-caste Indians.
Comparative intervention effectiveness is a pillar of EA, distinct from personal sacrifice, so the two are not interchangeable. I reject the idea that there is some sort of prejudice in choosing to help one group over another, whether the groups are defined by physical condition, location, etc. One always has to choose; no one can help every group. Taking the example of preventing blindness vs. assisting the blind, the former is clearly the wildly superior intervention for blindness, so it is absurd to call it prejudiced against the blind.
Thanks! In response to which point is that? I think points 5 and 6 should answer your objection, but tell me if they don’t. Truth is not at issue here (if we ignore the parenthetical at the very end that isn’t meant to be part of my argument). I’d even say that Peter Singer deals in concepts of unusual importance and predictive power. But I think it’s important to make sure that we’re not being misunderstood in dangerous ways by valuable potential allies.
The objection that it’s ableist to promote funding for trachoma surgeries rather than guide dogs doesn’t have to do with how many QALYs we’d save by providing someone with a guide dog or a trachoma surgery. Roughly, this objection is about how much respect we’re showing to disabled people. I’m not sure how many of the people who have said that this example is ableist are utilitarians, but we can actually make a good case that using the example causes negative consequences precisely because it’s ableist. (It’s also possible that using the example as it’s typically used causes negative consequences by affecting how intellectually rigorous EA is, but that’s another topic.) A few points that might be used to support this argument:
On average, people get a lot of value out of having self-esteem; often, having more self-esteem on the margins enables them to do value-producing things they wouldn’t have done otherwise (flow-through effects!). Sometimes, it just makes them a bit happier (probably a much smaller effect in utilitarian terms).
Roughly, raising or lowering the group-wise esteem of a group has an effect on the self-esteem of some of the group’s members.
Refraining from lowering a group’s esteem isn’t very costly if doing so involves nothing more than using a different tone. (There are of course situations where making a certain claim will raise or lower a group’s esteem by a large amount if a certain tone is used, and a lesser amount if a different tone is used, even though the group’s esteem changes in the same direction in either case.)
Decreases in a group’s ability to do value-producing things or be happy because their esteem has been lowered by someone acting in an ableist manner do not cause others to experience a similarly sized boost to their ability to be happy or do value-producing things. (I.e., the truth value of claims that “status games are zero-sum” has little effect on the extent to which it’s true that decreasing a group’s esteem by e.g. ableist remarks has negative utilitarian consequences.)
I’ve generally found it hard to make this sort of observation publicly in EA-inhabited spaces, since I typically get interpreted as primarily trying to say something political, rather than primarily trying to point out that certain actions have certain consequences. It’s legitimately hard to figure out what the ideal utilitarian combination of tone and example would be for this case, but it’s possible to iterate towards better combinations of the two as you have time to try different things according to your own best judgement, or just ask a critic what the most hurtful parts of an example are.
Peter, even if a trachoma operation cost the same as training a guide dog, and didn’t always prevent blindness, it would still be an excellent cost comparison, because vision correction is vastly superior to having a dog.
And moreover, it doesn’t just improve vision; it removes a source of intense pain.
If I try to steelman the argument, it comes out something like:
Some people, when they hear about the guide dog vs. trachoma surgery contrast, will take the point to be that ameliorating a disability is intrinsically less valuable than preventing or curing an impairment. (In other words, that helping people live fulfilling lives while blind is necessarily a less worthy cause than “fixing” them.) Since this is not in fact the intended point, a comparison of more directly comparable interventions would be preferable, if available.
Why is the choice not directly comparable? If it were possible to offer a blind person a choice between being able to see, or having a guide dog, would it be so difficult for the blind person to choose?
Still, if you can suggest better comparisons that make the same point, I’ll be happy to use them.
Hi Peter,
Some examples that might be useful:
1) Differences in income
A US college graduate earns about 100x more than GiveDirectly recipients, suggesting money can go far further with GiveDirectly (100x further if utility is ~log-income). https://80000hours.org/career-guide/anyone-make-a-difference/
2) The cost to save a life
GiveWell now estimates $7,500 for a death prevented by malaria nets (plus many other benefits). Rich country governments, however, are often willing to pay over $1m to save the life of one of their citizens—a factor of 130+ difference. https://80000hours.org/career-guide/world-problems/#global-health-a-problem-where-you-could-really-make-progress
3) Cost per QALY
It still seems possible to save QALYs for a few hundred dollars in the developing world, whereas the UK’s NHS is willing to fund most things that save a QALY for under £20,000, and some that cost over £30,000—again a factor of 100 difference.
So I still think a factor of 100x difference is defensible, though if you also take into account Brian’s point below, it might be reduced to, say, a factor of 30—though that’s basically just a guess, and it could go the other way too. More on this: http://reflectivedisequilibrium.blogspot.com/2014/01/what-portion-of-boost-to-global-gdp.html
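To make the arithmetic behind these three comparisons explicit, here is a minimal back-of-envelope sketch in Python. The figures are the approximate ones quoted above, and the log-utility assumption in the first comparison is a common but contestable modeling choice:

```python
# Back-of-envelope versions of the three multipliers quoted above.

# 1) Differences in income: with log utility, the marginal utility of a
#    dollar scales as 1/income, so a 100x income gap implies roughly 100x
#    more utility per dollar given to the poorer recipient.
income_ratio = 100
income_multiplier = income_ratio            # since d(log x)/dx = 1/x

# 2) Cost to save a life: GiveWell's malaria-net figure vs. what
#    rich-country governments will pay to save one citizen's life.
cost_per_life_nets = 7_500                  # USD
cost_per_life_rich = 1_000_000              # USD, often more
life_multiplier = cost_per_life_rich / cost_per_life_nets    # ~133x

# 3) Cost per QALY: developing-world interventions vs. the NHS threshold.
#    (Currency conversion ignored; order-of-magnitude comparison only.)
cost_per_qaly_dev = 300                     # USD, "a few hundred dollars"
cost_per_qaly_nhs = 20_000                  # GBP, lower NHS threshold
qaly_multiplier = cost_per_qaly_nhs / cost_per_qaly_dev      # ~67x

print(income_multiplier, round(life_multiplier), round(qaly_multiplier))
```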
I like these examples but they do have some limitations.
I’m still searching for some better examples that are empirically robust as well as intuitively powerful.
(I’m looking for the strongest references to support my claim here that “there is a strong case that most donations go to charities that improve well-being far less per dollar than others.” Of course, I’m willing to admit there’s some possibility that we don’t have strong evidence for this.)
1) Differences in income: This will not be terribly convincing to anyone who doesn’t already accept the idea of vastly diminishing marginal utility, and there is the standard (inadequate but hard to easily rebut) objection that “things are much cheaper in developing countries”.
2) The cost to save a life: Yes, rich country governments factor this into their calculations, but is this indeed the calculation that is relevant when considering “typical charities operating in rich countries”? It also does not identify a particular intervention that is “much less efficient”.
3) Cost per QALY / UK NHS: Similar limitations as in case 2.
What is the strongest statistic or comparison for making this point? Perhaps Sanjay Joshi of SoGive has some suggestions?
Perhaps a comparison based on the tables near the end of Jamison, D. T., et al. (2006), Disease Control Priorities in Developing Countries? 2006 was a long time ago, however.
And those flow-through effects may often be bigger per person when helping people in richer countries, since people in richer countries tend to have more impact on economic growth, technological and memetic developments, etc. than those in poorer countries. (Of course, whether these flow-through effects are net good or net bad is not clear.)
On the ableism point, my best guess is that the right response is to figure out the substance of the criticism. If we disagree, we should admit that openly and forgo the support of people who do not in fact agree with us. If we agree, then we should take the criticism into account and adjust both our beliefs and our statements. Directly optimizing to avoid adverse perceptions seems like it would lead to a distorted picture of what we are about.
A reading I found that laid things out clearly: Utilitarians and disability activists: what are the genuine disagreements?
The article Vollmer cites says:
Singer’s idea about the relative value of guide dogs sets up a false dichotomy, assuming that you can fund guide dogs or fund medical prevention. In fact, you can do both.
In this case, that seems to be the substance of the criticism. You can’t anticipate every counter-argument one could make when talking to bigger audiences, but this one is pretty common. It might be necessary to say something like:
if I have to decide where to donate my $100...
Not sure it would help; it could be that such arguments trigger bad emotions for other reasons, and the counter-arguments we hear are just rationalizations of those emotions. It does feel like a minefield.
Therefore, when comparing any two charities while introducing someone (especially an audience) to EA, we must phrase the comparison carefully and sensitively. BTW, I think there is something to learn from the way Singer phrased it in his TED Talk:
Take, for example, providing a guide dog for a blind person. That’s a good thing to do, right? Well, right, it is a good thing to do, but you have to think what else you could do with the resources. It costs about 40,000 dollars...
I agree with those concerns.