The main intervention they care about to reduce drifting towards non-altruism is increasing involvement with the Effective Altruism community. I’ve not seen any justification for focusing (exclusively) on that.
The first point I’d make here is that I’d guess people writing on these topics aren’t necessarily focused primarily on a single dimension running from less to more altruistic. Instead, I’d guess they see the package of values and ideas associated with EA as particularly useful and particularly likely to increase how much positive impact someone has. To illustrate, I’d rather have someone donating 5% of their income to any of the EA Funds than donating 10% of their income to a guide dog charity, even though the latter person may be “more altruistic”.
I don’t think EA is the only package of values and ideas that can cause someone to be quite impactful, but it does seem an unusually good and reliable package.
Relatedly, I care a lot less about drifts away from altruism among people who aren’t likely to be very effectively altruistic anyway than among people who are.
So if I’m more concerned about drift away from an EA-ish package of values than about drift away from altruism alone, and more concerned about drift away from altruism among relatively effectiveness-minded people than among less effectiveness-minded people, then I think it makes sense to pay a lot of attention to levels of involvement in the EA community. Of course, someone can keep the values and behaviours without being involved in the community, as you note, but being in the community likely helps a lot of people retain, and effectively act on, those values.
And then, as discussed in Todd’s post, we do have evidence that levels of engagement with the EA community reduces drop-out rates. (Drop-out isn’t identical to value drift, which is worth noting, but it still seems important and correlated with it.)
Finally, I think most of the other 9 areas you mention already receive substantial non-EA attention and would be hard to influence cost-effectively. That said, this doesn’t mean EAs shouldn’t think about such things at all.
The post Reducing long-term risks from malevolent actors is arguably one example of EAs considering efforts with that sort of scope and difficulty that would potentially, in effect, increase altruism (though that’s not the primary focus/framing). And I’m currently doing some related research myself. But it does seem like things in this area will be less tractable and less neglected than many things EAs think about.
Thanks! I agree with your clarifications.

levels of engagement with the EA community reduces drop-out rates

“Drop-out” meaning zero engagement, right? So the claim has the form “the more you do X, the less likely you are to stop doing X completely”. It’s not clear to me to what extent it’s causal, but yes, it still seems like useful info!

I think most of the other 9 areas you mention seem like they already receive substantial non-EA attention

Oh, that’s plausible!

The post Reducing long-term risks from malevolent actors is arguably one example of EAs considering efforts that would have that sort of scope and difficulty and that would potentially, in effect, increase altruism

Good point! In my post, I was mostly thinking at the individual level. Looking at the population level and over a longer time horizon, I should probably add other possible interventions, such as:

Incentives to have children (political, economic, social)
Immigration policies
The economic system
Genetic engineering
Dating dynamics
Cultural evolution
“Drop-out” meaning zero engagement, right? So the claim has the form “the more you do X, the less likely you are to stop doing X completely”. It’s not clear to me to what extent it’s causal, but yes, it still seems like useful info!
I can see why “we do have evidence that levels of engagement with the EA community reduces drop-out rates” might sound like a somewhat empty/tautological sentence. (Then there’s also the question of causality, which I’ll get to at the end.) But I think it’s meaningful when you consider Todd’s definitions (which I perhaps should’ve quoted before).
He defines the drop out rate as “the rate at which people both (i) stop engaging with the effective altruism community and (ii) stop working on paths commonly considered high-impact within the community” (emphasis added).
And I don’t think he precisely defines engagement, but he writes:
My guess is that the most significant factor [in drop-out rates] is someone’s degree of social integration—i.e. I expect that people with friends or colleagues who are into EA are less likely to drop out of the community.
Relatedly, I think the degree to which someone identifies with EA will be important. For instance, someone who has been featured in the media as being into EA seems much less likely to drop out. We could think of both of these as aspects of ‘engagement’.
So I think the claim is something like “more social integration into EA and identification as an EA at time 1 predicts a higher chance of staying engaged with EA and still pursuing paths commonly considered high-impact at time 2”.
(I’d encourage people to read Todd’s post for more details; these are just my quick comments.)
Then there is of course still the question of causality: Is this because engagement reduces drop out, or because some other factor (e.g., being the sort of person who EA really fits with) both increases engagement and reduces drop out? My guess is that both are true to a significant extent, but I’m not sure if we have any data on that.
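To make that confounding possibility concrete, here’s a toy simulation (a sketch with entirely made-up numbers, not real EA survey data; the latent “fit” variable is my own invention for illustration). In it, a single underlying trait drives both engagement and retention, and there is no causal arrow from engagement to retention at all, yet engaged people are still retained at roughly double the rate:

```python
import random

random.seed(0)

# Toy model (invented numbers): a latent "fit with EA" trait in [0, 1]
# independently raises BOTH the chance of being engaged and the chance of
# staying. Engagement itself has no causal effect on staying here.
n = 10_000
stayed_given_engaged = []
stayed_given_not_engaged = []
for _ in range(n):
    fit = random.random()            # latent confounder
    engaged = random.random() < fit  # fit -> engagement
    stayed = random.random() < fit   # fit -> retention (engagement plays no role)
    (stayed_given_engaged if engaged else stayed_given_not_engaged).append(stayed)

p_engaged = sum(stayed_given_engaged) / len(stayed_given_engaged)
p_not = sum(stayed_given_not_engaged) / len(stayed_given_not_engaged)
print(f"P(stayed | engaged)     = {p_engaged:.2f}")  # close to 2/3
print(f"P(stayed | not engaged) = {p_not:.2f}")      # close to 1/3
```

So the correlational finding is consistent both with engagement causing retention and with a pure selection story; my actual guess, as above, is that both mechanisms are at work to some extent.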
I see, thanks!