I really appreciated this post. I don’t agree with all of it, but I think that it’s an earnest exploration of some important and subtle boundaries.
The section of the post that I found most helpful was “EA ideology fosters unsafe judgment and intolerance”. Within that, the point I found most striking was that there’s a tension between how language gets used in ethical frameworks and in mental wellbeing frameworks, and that people often aren’t well equipped with the tools to handle those tensions. This … basically just seems correct? And it seems like a really good dynamic for people to be tracking.
Something I kind of wish you’d explored a bit more is the ways in which EA may be helpful for people’s mental health. You get at that a bit when talking about how/why it appeals to people, and you seem to acknowledge that there are ways in which engaging can be healthy. But I think we’ll reach a better/deeper understanding of the dynamics faster if we look honestly at the ways EA can be good for people as well as bad, and at what level of tradeoff (in terms of potentially being bad for people) is worth accepting. I think the correct answer is “a little bit”: there’s no way to avoid all harms without staying out of the space entirely, and that would be a clear mistake for EA. That said, I’m also inclined to think the correct answer is “somewhat less than at present”.
My mental health has greatly improved since joining EA, and I think that’s because:
- the culture encourages having an internal locus of control (i.e. being agentic), which is associated with better mental health outcomes
- it confronted me with the reality that I’m incredibly privileged in global terms, so I should be using that privilege to help others rather than feeling sorry for myself
- helping others is intrinsically satisfying
I do think there’s more that could be done to foster psychological safety and to remind people that their intrinsic value is separate from their instrumental value to the EA movement. This is why I do community-building work.
Idk, I’m not a maximiser, but I do think it’s useful to have barriers to entry that require strong signals of shared values. I’m not interested in running a social club for privileged people who aren’t actually contributing money and/or labour to EA causes.
I think most EAs living in rich countries should, by default, be working normal jobs while donating 10% of their income and contributing to EA projects on the side. That also helps keep them calibrated with the wider world.
Exploring what’s helpful is definitely an interesting angle that generates ideas. One that comes to mind is how EA communicates around the Top Charities Fund: basically, “let us do the heavy lifting and we’ll do our best to figure out where your donations will have impact”. This has two attributes that I particularly like. Firstly, it makes it maximally easy for a reader to just accept a TL;DR and feel good about their choice (and this is generally positive for a non-EA donor regardless of how good or bad TCF’s picks are). Secondly, I think the messaging is more neutral and a bit closer to invitational consent culture. Hardcore EA is more likely to imply that you “should” think and care about whether TCF is actually a good fund and decide for yourself, whereas the consent-culture version might be psychologically beneficial to both EAs and non-EAs while achieving the same or better numeric outcomes.