Yes, precisely. Although—there are so many variants of negative utilitarianism that “precisely” is probably a misnomer.
Cornelius
Yeah, as a two-level consequentialist moral anti-realist, I'm actually pretty tired of EA's insistence on "how many lives we can save" instead of emphasizing how much "life fulfillment and happiness" you can spread. I always thought this was not only a PR mistake but also a utilitarian mistake. We're trying to prevent suffering, so preventing cases where a single person endures more suffering on the road to death is more morally relevant, utils-wise, than preventing a death that involves less suffering.
Nonetheless, this is the first I've heard that violence and exploitation are undervalued by EAs. It has always seemed to me that EAs generally weep and feel angst in their gut when they read about the violence and exploitation of their fellow man. But what can we do? Regions of violence are notoriously difficult places to set up tractable interventions. So it has always seemed to me that we should focus on what we know works, since lifting people out of disease and poverty empowers them to address issues of violence and exploitation themselves. And giving someone their agency back in this way is, in my view, worth a lot of moral weight due to its long-term (albeit hard-to-measure) consequences.
And now I'm going to say something that I suspect some people won't like.
I consistently feel that a lot of the critique of EA has to do with how others perceive EAs rather than with what they are really like; i.e., prejudice. I mentioned above that I generally feel EAs are genuinely moved to tears (or whatever counts as a significant feeling for them) by issues of violence. But I find that as soon as such a person spends most of their time in the public space talking about math and weird utilitarian expected-value calculations, they are suddenly viewed as no longer having a heart, or "the right heart." The amount of compassion and empathy a person has is not tied to the weird mathematical arguments they put out but to what they do and feel inside (that's how I operationalize "compassion," at any rate: an internal state leading to external consequences. Yes, I know, that's a pretty virtue-ethics way to look at it; so sue me).
Anyway, maybe part of this is because I know what it feels like to be the high-school nerd who secretly cries when he sees someone getting bullied at break time, but who then talks to people about and develops extensively researched weird ideas like transhumanism as a means of optimizing human flourishing (instead of, say, caring to go to the anti-bullying event everyone thinks I should attend if I really cared about bullying). It makes sense to me that many people think I have my priorities wrong. But it certainly isn't due to a lack of compassion and concern for my fellow man. It's not too hard to go from this analogy to arguing that the same misperception applies to EAs.
This is perhaps what I absolutely love about the EA community. I’ve finally found a community of nerds where I can be myself and go in depth with uber-weird (any and all) ideas without being looked at as any less compassionate <3.
When people talk about ending violence and exploitation by doing something that will change the "system" that keeps these problems in place, I get upset. This system is often invisible and amorphous, a product of ideology rather than of cost-effectiveness calculations. It upsets me because it often means people are willing to pass up giving someone their agency back, which you clearly can do by donating to proven disease- and poverty-alleviation interventions, in order to support a cause against violence and exploitation that aligns with their ideology. This essentially seems to me a way of making donation about yourself, of making sure you feel content in your own ethical worldview because doing nothing about that violence and exploitation makes you feel bad, rather than making it about the individuals on the receiving end of the donation.
Yeah, I know, my past virtue-ethics predilections are showing again. Even if someone like the person I've described above supports an anti-violence cause that, though difficult to measure for effectiveness, is nonetheless doing a lot of good in the world that we can't measure, I still don't like it. I'm caring about what people think and arguing that certain self-serving thoughts appear morally problematic independent of the end results they cause. So let me add that I'm also strongly opposed to forms of anti-realist virtue ethics: it's not enough to merely be aligned with the right way of thinking or the right ideology and trust that good things will come of it. The end result, the actual people on the receiving end, is what actually matters. And this is why I find a "mostly" utilitarian perspective so much more humanizing than the views of the many people who get uncomfortable with its extreme conclusions and then reject the whole thing. A more utilitarian perspective forces you to make it about the receiver.
Whatever the case, writing this has made me sad. I'm sad to see you go; you seem highly intelligent and a likely asset to the movement, and as someone who is on the front line of EA and PR I take this as a personal failure, but I wish you the best. Does anyone know of any EA-vetted charities working on violence and exploitation prevention? Even ones that are a stretch tractability-wise would be fine. I'd like to donate; it always makes me feel better.
I can also vouch for the success of “What’s one good thing and one bad thing that has happened to you this week/month/since last time?” Each person picks one of each and talks about it. Naturally, some people may bring up things related to EA very easily with this question if they are involved with it.
Couldn't you just counter by saying that if EA had been around back then, and had only just started figuring out how to do the most good, it would not have supported the abolitionist movement because of difficult EV calculations and because its resources were committed elsewhere? However, if the EA community had existed back then and had matured a bit, to the stage that something like OpenPhil also existed (OpenPhil being an EA org, for those reading who don't know), then it would very likely have supported cost-effective campaigns backing the abolitionist movement.
The EA community, like all entities, is an entity in flux. I don't like hearing "If it had existed back then, it wouldn't have supported the abolitionist movement, and therefore it has problems, which may implicitly imply it is bad because it thinks in a naughty, quantification-biased way." This sounds like an unfair mischaracterization to me, especially since you can cherry-pick what the EA community was like at a particular time (how much it knew) and how many resources it had specifically so that it wouldn't have supported the abolitionist movement, and then claim the reason is quantification bias.
What's better is: "If EA existed back then as it existed in 2012/2050/20xy with x resources, then it would not support the abolitionist movement." Now the factors of time and resources may well be a much better explanation for why EA wouldn't have supported the abolitionist movement than quantification bias.
Consider the EA community of 2050, which would have decades' worth of accumulated knowledge about how to deal with harder-to-quantify causes.
I suspect that if the EA community of 2050 had the resources of YMCA or United Way and existed in the 18th Century, it would have supported the hell out of the abolitionist movement.