I feel like that guy’s got a LOT of chutzpah to not-quite-say-outright-but-very-strongly-suggest that the Effective Altruism movement is a group of people who don’t care about the Global South. :-P
More seriously, I think we’re in a funny situation where maybe there are these tradeoffs in the abstract, but they don’t seem to come up in practice.
Like in the abstract, the very best longtermist intervention could be terrible for people today. But in practice, I would argue that most if not all current longtermist cause areas (pandemic prevention, AI risk, preventing nuclear war, etc.) are plausibly a very good use of philanthropic effort even if you only care about people alive today (including children).
Or, in the abstract, AI risk and malaria are competing for philanthropic funds. But in practice, a lot of the same people seem to care about both, including many of the people that the article (selectively) quotes. …And meanwhile most people in the world care about neither.
I mean, there could still be an interesting article about how there are these theoretical tradeoffs between present and future generations. But it’s misleading to name names and suggest that those people would gleefully make those tradeoffs, even if it involves torturing people alive today or whatever. Unless, of course, there’s actual evidence that they would do that. (The other strong possibility is that, if actually faced with those tradeoffs in real life, they would say, “Uh, well, I guess that’s my stop, this is where I jump off the longtermist train!!”)
Anyway, I found the article extremely misleading and annoying. For example, the author led off with a quote where Jaan Tallinn says directly that climate change might be an existential risk (via a runaway scenario), and then two paragraphs later the author is asking “why does Tallinn think that climate change isn’t an existential risk?” Huh?? The article could equally well have said that Jaan Tallinn believes that climate change is “very plausibly an existential risk”, and Jaan Tallinn is the co-founder of an organization that does climate change outreach among other things, and while climate change isn’t a principal focus of current longtermist philanthropy, well, it’s not like climate change is a principal focus of current cancer research philanthropy either! And anyway, it does come up to a reasonable extent, with healthy discussions focusing in particular on whether there are especially tractable and neglected things to do.
So anyway, I found the article very misleading.
(I agree with Rohin that if people are being intimidated, silenced, or cancelled, then that would be a very bad thing.)
Speaking of chutzpah, I’ve never seen anything quite like this:
“We can’t have people posting anything that suggests that Giving What We Can [an organization founded by Ord] is bad,” as Jenkins recalls. These are just a few of several dozen stories that people have shared with me after I went public with some of my own unnerving experiences.
He needs to briefly explain what the acronym ‘GWWC’ is—because otherwise the sentence will be incomprehensible—but because he wants to paint people as evil genocidal racists who don’t care about the poor, he can’t explain what type of organization GWWC is, or what the pledge is.
I think you raise a key point about theory of change and observed practice.
I think we’re in a funny situation where maybe there are these tradeoffs in the abstract, but they don’t seem to come up in practice.
This “funny situation” means that something is up with the theoretical model. If the tradeoffs exist in the theoretical model but don’t seem to show up in practice, then either:
(1) practice is not actually based on the explicit theory but is instead based on something else, or
(2) the tradeoffs do in fact exist in practice but are not noticed or acknowledged.
Both of these would be foundational problems for a movement organized around rationality and evidence-based practice.
Hmm, I guess I wasn’t being very careful. Insofar as “helping future humans” is a different thing than “helping living humans”, it means that we could be in a situation where the interventions that are optimal for the former are very-sub-optimal (or even negative-value) for the latter. But it doesn’t mean we must be in that situation, and in fact I think we’re not.
I guess if you think: (1) finding good longtermist interventions is generally hard because predicting the far future is hard, but (2) “preventing extinction (or AI s-risks) in the next 50 years” is an exception to that rule; (3) that category happens to be very beneficial for people alive today too; and (4) it’s not like we’ve exhausted every intervention in that category and we’re scraping the bottom of the barrel for other things… If you believe all those things, then it’s not really surprising that we’re in a situation where the tradeoffs are weak-to-nonexistent. Maybe I’m oversimplifying, but something like that, I guess?
I suspect that if someone had an idea about an intervention that they thought was super great and cost-effective for future generations and awful for people alive today, well, they would probably post that idea on the EA Forum just like anything else, and then people would have a lively debate about it. I mean, maybe there are such things... just nothing springs to mind.
To me, the question is “what are the logical conclusions that longtermism leads to?” The idea that, as of today, we have not exhausted every intervention available is less relevant when we are considering hundreds of thousands or millions of years.
I suspect that if someone had an idea about an intervention that they thought was super great and cost-effective for future generations and awful for people alive today, well, they would probably post that idea on the EA Forum just like anything else, and then people would have a lively debate about it.
I agree. The debate would be over whether or not to follow the moral reasoning of longtermism. Something that is “awful for people alive today” can still be completely in line with longtermism; that could well be the situation. Not supporting the intervention would then constitute a break between theory and practice.
I think it is important to address the implications of this funny situation sooner rather than later.