“The USSR used to plead the future when doing something nasty in the present...”
Here longtermists can respond that, for the work they currently prioritize, Pascal’s mugging is not occurring, since the probability of existential risk is nontrivial.
I’d note that for Marx and Engels, communism was not merely a “nontrivial probability,” but a historical inevitability. “But these ends really do justify the means” doesn’t sound very reassuring.
More importantly, however, this quote points out how common it is for political movements to use violence and oppression to seize and maintain power under altruistic pretexts, with the supposed ends serving merely as a fig leaf.
There are strains of seemingly violent thought heavily overrepresented within EA relative to their prevalence in other political or philosophical communities. Examples include the world destruction argument, as well as proposals to suppress and destroy certain strains of technological capabilities research, including via authoritarian surveillance-based political regimes.
How do we reassure people that EA’s not going to start sponsoring rain forest paving projects in the name of putting an end to wild animal suffering? Is there a way to do that without diminishing our culture of free and open debate about what the goals of our movement ought to be? I am not sure.
Hi—these are really great points.[1] I mostly tried to sidestep these issues in my piece, largely because I don’t know where I stand.
However, I will attempt to poke at the points you pose.
I’d note that for Marx and Engels, communism was not merely a “nontrivial probability,” but a historical inevitability. “But these ends really do justify the means” doesn’t sound very reassuring.
I agree with this: I misread the sentiments of the comments and responded inaccurately. I will likely edit this part of the post later (and maybe even write a future post about it). For the time being, I will not defend why EA is philosophically sound against these problems, but rather why it has some practical safeguards against them.
How do we reassure people that EA’s not going to start sponsoring rain forest paving projects in the name of putting an end to wild animal suffering?
I see where this concern comes from: EA has a lot of money, power, and dedicated manpower backing its specific interests. However, I think we could reassure people with a few points that are true and will not compromise the community’s epistemics:
Overrepresented != Majority: I wouldn’t say violent thought patterns or negative utilitarianism are dominant ideologies in the community.
Organizationally: Worldview diversity is a virtue of the EA community in contrast to many similar organizations; decisions seem to be distributed across many individuals and organizations; and EAs are not conflict-averse. These features guard against bad decision-making.
Specifically, worldview diversity is a ward against aggressively optimizing towards niche goals, which the ‘paving the rainforest’ problem is an example of.
Practically: As far as I know, the movement doesn’t seem to be explicitly pursuing these approaches at the moment (though I see why this wouldn’t be very reassuring).
Either way, thanks for your comment; you’ve given me a lot to think about!
Thanks, this is interesting. You could consider running studies on Mechanical Turk to further test your hypotheses. That sample would be more random (though not fully random) than the sample of people who choose to comment on the interview. You could either ask them for open-ended comments or, for example, ask them whether they agree or disagree with a certain claim on a Likert scale.
Minor: I would consider changing the title from “a Layperson’s Critiques” to “Laypeople’s Critiques” or something similar. I initially thought you had just interviewed one person, which would have been less interesting than what you’ve done.
Thanks, fixed! I will definitely consider doing follow up studies: I was somewhat surprised by my results and would love to see if they carry over to a more random sample.
Here’s another interesting criticism piece (https://aeon.co/essays/why-longtermism-is-the-worlds-most-dangerous-secular-credo) that carries some similar anxieties, though I feel it misrepresents both EA and longtermism in its argumentation.
I like these responses, and I’d be comfortable with using them. Nice post, by the way!
While referencing the 7 Generations principle, I would credit it to “the Iroquois confederacy” or “the Haudenosaunee (Iroquois) confederacy” rather than “the Iroquois tribe”. There isn’t one tribe associated with that name; it’s an alliance formed by the Mohawk, Oneida, Onondaga, Cayuga and Seneca (and joined by the Tuscarora in 1722).
(Aside: In Ontario, where I’m from, we tend to use the word “nation” rather than “tribe” to refer to the members of the confederacy, but it’s possible this is a US/Canada difference, and the part that bothered me was the inaccuracy of the singular more than the specific word choice.)
Thanks for putting together the summary, I enjoyed reading it!
Fixed! Thank you for pointing this out!
Thank you for doing this! I recently saw comments directed at another leftist intellectual about EA & What We Owe The Future (https://mobile.twitter.com/rcbregman/status/1558546645247311873) and was somewhat astonished by just how off-mark the comments were.
It’s also interesting that one type of comment is “you can’t predict the future” while another is “I’m really certain [X] will happen in the future.”
Thanks for sharing this (though as a result I spent another hour falling down another longtermism criticism rabbit hole … ). It’s relieving (and saddening) to see that there is a certain degree of commonality with my themes as I honestly did not know how generalizable they would be: many of these comments carry such a similar tone & concern to the ones I found underneath the podcast.
“Plus its name is a fraud. As if it was actually about the long-term, they would’ve taken the effort to actually curb the climate crisises going on, in order to make sure we’re not underwater or dying of thirst.”
Yes, it’s saddening. I also saw that the person being quoted (Paris Marx) interviewed Phil Torres, which I find concerning because I’m sure it’s full of misinformation. I’m not sure what we should do about that; maybe not driving any attention to it is the best approach at this point.