Pitching EA to someone who believes certain goods can’t be assigned a value
I’ve often heard, as a critique of Effective Altruism, the argument that certain goods (aesthetic, familial, cultural, communal, and so on) can’t, or shouldn’t, be quantified, and that the implicit utilitarianism of EA forces us to do just that. A friend once told me that “it’s like forcing someone to pick a favorite child”: even if, in your heart of hearts, you could pick a favorite child, the act itself seems not only to create unnecessary psychological turmoil (i.e., negative utility) but also to pose a question most people can’t answer through bare-bones empiricism. For example, does it really seem feasible to claim that we can determine, even somewhat accurately, the value of a Van Gogh painting, or of the heartbreak of losing a family member?
I’ve heard many prominent EAs argue (in classic Singerian fashion) that even though it may seem difficult, or even cruel, we have a utilitarian responsibility to make these harsh valuations for the sake of accurately defining our priorities. They might say that even though you may not be able to put a specific utility value on Van Gogh’s Starry Night, the risk of failing to do so is that your $5,000 donation to the Museum of Modern Art could instead have improved innumerable human, animal, or future lives. In short, they argue that if you don’t assign these goods some reasonable valuation, there will be no empirical backing for maximizing the utility of your donation (relative utility, by definition, entails comparing values). If we decide not to give these goods any utilitarian valuation at all, how can we decide where our resources will do more good?
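To make that comparison logic concrete, here is a toy sketch in Python. Every number in it is a placeholder I have invented purely for illustration; neither the cost-per-life figure nor the museum utility comes from any real cost-effectiveness estimate.

```python
# Toy expected-value comparison, purely to illustrate the utilitarian
# logic above. Every number here is a made-up placeholder; the
# cost-per-life figure and the museum utility are NOT real estimates.

donation = 5_000  # dollars, matching the example above

# Hypothetical assumption: a global-health charity improves one life
# per $5,000 donated.
lives_improved = donation / 5_000

# Hypothetical assumption: the utility of the museum gift, expressed in
# the same "lives improved" units. The critic says this number is
# unknowable; the utilitarian reply is that refusing to estimate it
# makes any comparison impossible.
museum_utility = 0.1  # arbitrary guess

better = "charity" if lives_improved > museum_utility else "museum"
print(f"Under these made-up numbers, the {better} donation does more good.")
```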
I am not making an argument against utilitarianism. What I will posit is a way to argue for EA while sidestepping the trepidation many people feel about assigning values to seemingly unquantifiable goods. Instead of arguing through the standard utilitarian template, I propose making the argument from the position of fairness and expected equal potential.
By expected equal potential I mean the expectation that everyone has the capability to appreciate goods such as aesthetics, family, culture, and community; I believe most people will accept this premise. So even if you think you can’t put a value on a Van Gogh painting, it seems reasonable to assume that most people, given the right opportunities, have the same capacity to derive that unquantifiable value from it. The same goes for interpersonal human goods: it would be unusual to hear someone say that not everyone can love their family or their culture. From this assumption of expected equal potential (that all people, at minimum, have the capability to appreciate these unquantifiable goods), we can argue further that it is normatively fair to try to maximize the number of people who have the opportunity to appreciate them. Premature death, infant mortality, and chronic illness, for example, are all strongly correlated with extreme poverty. So even if you don’t believe we can quantify the value of family, it still appears fair to posit that we should try to give as many people as possible the ability to enjoy their families (which also makes this argument compatible with utilitarianism, with the utility function being the maximization of opportunity; see the sketch below).
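As a minimal sketch of what that utility function might look like, assuming entirely hypothetical interventions and made-up counts of people who gain an opportunity, one could rank options by opportunities created per dollar rather than by the value of the goods themselves:

```python
# Minimal sketch of the "maximization of opportunity" utility function,
# with entirely hypothetical interventions and counts. Options are
# ranked by how many people gain the opportunity to enjoy goods like
# family, art, or community, without ever pricing those goods.

interventions = {
    # name: (cost in dollars, people newly given the opportunity)
    "malaria_prevention": (5_000, 3),  # placeholder figures
    "museum_grant":       (5_000, 0),  # assumes no one newly gains access
}

def opportunities_per_dollar(cost: float, people: int) -> float:
    """The quantity this framing tells us to maximize."""
    return people / cost

best_name, _ = max(
    interventions.items(),
    key=lambda item: opportunities_per_dollar(*item[1]),
)
print(f"Highest opportunity per dollar: {best_name}")
```

Note that nothing in this sketch depends on assigning a value to the goods themselves; only opportunities to appreciate them are counted.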
I therefore see this argument as valid within the parameters of both utilitarianism and opposing ethical systems built on justice and fairness, namely certain branches of deontology. I hope this article shows that when someone says they aren’t a fan of Effective Altruism because they cherish goods they can’t, or don’t want to, assign a set value to, there is still a way to convince them of EA while accepting that premise: through the argument of fairness and expected equal potential.
I have made this argument to more than five people and have, so far, had success each time. However, I am interested to hear whether other members of the community have approached this criticism of EA in different ways, or whether they disagree with my approach.