I disagree with the common framing that saving lives and so on constitute one straightforward, unambiguous way to do good, and that longtermism just constitutes or motivates some interventions with the potential to do even more good.
It seems to me (and I’m not alone, of course) that concern for the long term renders the sign of the value of most of the classic EA interventions ambiguous. In any event, it renders the magnitude of their value more ambiguous than it is if one disregards flow-through effects of all kinds. If
accounting for long-term consequences lowers the expected value (or whatever analog of expected value we use in the absence of precise expectations) of classic EA interventions, in someone’s mind, and
she’s not persuaded that any other interventions—or, any she can perform—offer as high (quasi-)expected value, all things considered, as the classic EA interventions offer after disregarding flow-through effects,
then I think it’s reasonable for her to feel less happy about how much good she can do as she becomes more concerned about the long term.
For the record, I don’t know how common this feeling is, or how often people feel more excited about their ability to save lives and so on than they did a few years ago. One could certainly think that saving lives, say, has even more long-term net positive effects than short-term positive effects. I just want to say that when someone says that they feel less excited about how much good they can do, and that longtermism has something to do with that, that could be justified. They might just be realizing that doing good isn’t and never was as good as they thought it was.
Yep, I agree that if i) you personally buy into the long-termist thesis, and ii) you expect the long-term effects of ordinary do-gooding actions to be bigger than the short-term effects, and iii) you expect these long-term effects to be negative, then it makes sense to be less enthusiastic about your ability to do good than before.
However, I doubt most people who feel like I described in the post fall into this category. As you said, you were uncertain about how common this feeling is. Lots of people hear about the much bigger impact you can have by focussing on the far future. Significantly fewer are well versed in the specific details and practical implications of long-termism.
While I have heard about people believing ii) and iii), I haven’t seen either argument carefully written up anywhere. I’d assume this is true for lots of people. There has been a big push in the EA community to believe i); as far as I can tell, this has not been true for ii) and iii).
If I’m not misunderstanding you, being less enthusiastic than before just requires (i) (if by “the long-termist thesis” we mean the moral claim that we should care about the long term) and (iii). I don’t think that’s a lot of requirements. Plus, this is all in a framework of precise expectations; you could also just think that the long-term effects are ambiguous enough to render the expected value undefined, and endorse a decision theory which penalizes this sort of ambiguity.
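To make the point about ambiguity concrete, here is a toy numeric sketch (all numbers are hypothetical, chosen purely for illustration): an action with a known short-term benefit but ambiguous long-term effects can be modeled with a *set* of plausible expectations rather than a single precise one, and an ambiguity-averse rule such as maxmin expected utility then evaluates the act by its worst case over that set.

```python
# Toy illustration with made-up numbers: one intervention, a fixed
# short-term benefit, and a set of plausible long-term effects.
short_term_value = 10.0

# Under imprecise expectations, no single entry in this set is privileged.
plausible_long_term_effects = [-30.0, -5.0, 0.0, 20.0, 40.0]

# Total value of the act under each plausible long-term expectation.
totals = [short_term_value + lt for lt in plausible_long_term_effects]

# Precise expected-value reasoning would need one number; here the sign
# of the total varies across the set, so "the" expected value is undefined.
print(min(totals), max(totals))  # -20.0 50.0

# One ambiguity-averse rule (maxmin expected utility): score the act
# by its worst-case expectation over the set of plausible expectations.
maxmin_value = min(totals)
print(maxmin_value)  # -20.0
```

On this sketch, the same short-term benefit that looks clearly positive in isolation gets a negative worst-case score once ambiguous long-term effects are included, which is one way to formalize why someone could become less enthusiastic without holding any precise negative expectation.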
My guess is that when people start thinking about longtermism and get less excited about ordinary do-gooding, this is often at least in part due either to a belief in (iii) or, more commonly, to the realization of the ambiguity, even when this isn’t articulated in detail. That seems likely to me (a) because, anecdotally, it seems relatively common for people to raise concerns along these lines independently after thinking about this stuff for a while and (b) because there has been some push to believe in this ambiguity, namely all the writing on cluelessness. But of course that’s just a guess.
In principle you only need i) and iii), that’s true, but I think in practice ii) is usually also required. Humans are fairly scope insensitive, and I doubt we’d see low community morale from ordinary do-gooding actions being less good by a factor of two or three. As an example, GiveWell’s estimates of how much it costs to save a life with AMF have historically differed by about this much, and that didn’t seem to have much of an impact on community morale. Not so now.
Our crux seems to be that you assume cluelessness, or ideas in the same space, are a large factor in producing low community morale for doing good. I must admit that I was surprised by this response: I personally haven’t found these arguments to be particularly persuasive, and most people around me seem to feel similarly about such arguments, if they are familiar with them at all.
I don’t know if there is lower community morale of the sort you describe—you’re better positioned to have a sense of that than I am—but to the extent that there is, yes, it seems we disagree about whether to suspect that cluelessness would be a significant factor.
It would be interesting to include a pair of questions on the next EA survey about whether people feel more or less charitably motivated than last year, and, if less, why.
Have you written somewhere about why you don’t find cluelessness arguments to be particularly persuasive?
No, I haven’t. Given the number of upvotes Phil’s comment received (from which I conclude that a decent fraction of people do find arguments in this space demotivating, which is important to know), I will probably read up on it again. But I very rarely write top-level posts, and the probability of this investigation turning into one is negligible.
Got it.
Perhaps a few bullet points in a comment if there’s no space for a top-level post (better written quickly than not at all...)
Hi Milan,
This was now quite a while ago, but I have spent some time trying to figure out why I don’t find cluelessness arguments persuasive. After we spent a bunch of time deconfusing ourselves, Alex has written up almost everything I could say on the subject in a long comment chain here.
Thanks… I replied on that thread.
Through thinking about these comments, I did remember an EA Forum thread from 4 years ago in which ii) and iii) were argued about: https://forum.effectivealtruism.org/posts/ajPY6zxSFr3BbMsb5/are-givewell-top-charities-too-speculative
It’s worth reading the comment section in full. Turns out my position has been consistent for the past 4 years (though I should have remembered that thread!).
Agreed—would love to see this written up by someone.
I expanded on this here: What consequences?