I agree the EA movement is not about minimizing suffering, and its existence does not minimize suffering.
I don’t even agree it’s about maximizing happiness or well-being, as some pretend, and its existence does not maximize those things.
In fact, it has no coherent goals at all, because people can’t agree on any, let alone formalize them. That’s why you see all this fluff, like “flourishing” or the “future of humanity”. Or, as OP babbles, “positively impact the world”. Completely meaningless phrases to avoid the elephant in the room.
The EA movement, like humanity itself, does not have coherent goals and I’m pretty convinced it’s going to cause much more suffering than it prevents. It may cause some additional wellbeing or pleasure, almost by accident, but not in any efficient or optimized way—that’s just not what it does sociologically and psychologically. But hey, movement growth! It’s the equivalent of humanity’s GDP growth or “progress”: meaningless metrics that take on a status of their own as quasi-religious goals to pledge allegiance to.
It’s all very toxic and pretty useless, which is why I support neither the EA movement nor “humanity” itself and would never consider it actual altruism.
I am tired of “reducing suffering.”
Yeah, I bet you have a long marathon of pain prevention behind you, hahaha.
I agree that EA as a whole doesn’t have coherent goals (I think many EAs already acknowledge that it’s a shared set of tools rather than a shared set of values). But why are you so sure that “it’s going to cause much more suffering than it prevents”?
That was in reference to both humanity and the EA movement, but it’s trivially true for the EA movement itself.
Assuming they have any kind of directed impact whatsoever, most of them want to reduce extinction risk to get humanity to the stars.
We all know what that means for the total amount of future suffering. And yes, there will be some additional “flourishing” or pleasure/happiness/wellbeing, but it will not be optimized. It will not outweigh all the torture-level suffering.
People like Toby Ord may use happiness as a rationalization to cause more suffering, but most of them never actually endorse optimizing it. People in EA generally gain status by decrying the technically optimal solutions to this particular optimization problem. There are exceptions of course, like Michael Dickens above. But I’m not even convinced they’re doing their own values a favor by endorsing the EA movement at this point.
What do you think of the effort to end factory farming? Or Tomasik et al’s work on wild animal suffering? Do you think these increase rather than decrease suffering?
I don’t know how much they actually change things. Everybody kind of agrees factory farms are evil, but behavior doesn’t really seem to change much as a result. Not sure the EA movement has made much of a difference in this regard.
As for wild animal suffering, there are ~5–10 people on the planet who care. The rest either don’t care or care about the opposite: conserving and/or expanding natural suffering. I am not aware of anything that could reasonably change that.
Reducing “existential risk” will of course increase wild animal suffering as well as factory farming, and future equivalents.
You don’t think directing thousands of dollars to effective animal charities has made any difference? Or spreading effectiveness-based thinking in the animal rights community (e.g. the importance of focusing on farm animals rather than, say, shelter animals)? Or promoting cellular agriculture and plant-based meats?
As for wild animal suffering: there are a few more than 5-10 people who care (the Reducing WAS FB group has 1813 members), but yes, the community is tiny. Why does that mean thinking about how to reduce WAS accomplishes nothing? Don’t you think it’s worth at least trying to see if there are tractable ways to help wild animals—if only through interventions like lawn-paving and humane insecticides?
May I ask which efforts to reduce suffering you do think are worthwhile?
“Reducing “existential risk” will of course increase wild animal suffering as well as factory farming, and future equivalents.”
Yes, this isn’t a novel claim. This is why people who care a lot about wild animal suffering are less likely to work on reducing x-risk.
I actually agree with almost all your points. I agree with Jesse though. GiveWell is an integral part of EA, and GiveWell alone has been hugely beneficial simply because of how it redirected donations. Don’t you think?