The “misery trap” section feels like it is describing a problem that EA definitely had early on, but mostly doesn’t now?
In early EA, people started thinking hard about this idea of doing the most good they could. Naively, this suggests giving up things that are seriously important to you (like having children), things that illegibly make you more productive (like a good work environment), or things that provide important flexibility (like having free time). The author quotes some early EAs struggling with this conflict, like:
> my inner voice in early 2016 would automatically convert all money I spent (eg on dinner) to a fractional “death counter” of lives in expectation I could have saved if I’d donated it to good charities. Most EAs I mentioned that to at the time were like ah yeah seems reasonable
I really don’t think many EAs would say “seems reasonable” now. If someone said this to me I’d give some of my personal history with this idea and talk about how it turns out that in practice this works terribly for people: it makes you miserable, very slightly increases how much you have available to donate, and massively decreases your likely long-term impact through burnout, depression, and short-term thinking.
One piece of writing that I think was helpful in turning this around was http://www.givinggladly.com/2013/06/cheerfully.html Another was https://www.benkuhn.net/box/
I think it’s not a coincidence that the examples the author links are 5+ years old? If people are still getting caught in this trap, though, I’d be interested to see more? (And potentially write more on why it’s not a good application of EA thinking.)
In case people are curious: Julia and I now have three kids and it’s been 10+ years since stress and conflict about painful trade-offs between our own happiness and making the world better were a major issue for us.
I think this has gotten better, but not as much better as you would hope, considering how long EAs have known this is a problem, how much they have discussed it, and how many resources have gone into trying to address it. There's actually a bit of an unfortunate fallacy here: assuming it isn't really an issue anymore because EA has gone through the motions of addressing it and had at least some degree of success. See Sasha Chapin's relevant thoughts:
https://web.archive.org/web/20220405152524/https://sashachapin.substack.com/p/your-intelligent-conscientious-in?s=r
Some of the remaining problem might come down to EA filtering for people who already have demanding moral views and an excessively conscientious personality. Some of it is probably due to the "by-catch" phenomenon the anon below discusses, which comes with applying expected-value reasoning to having a positively impactful career (still something widely promoted, and probably for good reason overall). And some of it is this other, deeper tension that I think Nielsen is getting at:
Many people in Effective Altruism (not most, I think, but many, including some of the most influential) believe in a standard of morality that is too demanding for it to be realistic for real people to reach. Given the prevalence of actualist over possibilist reasoning in EA ethics, and a basic awareness of human psychology, pretty much everyone who believes this is on board with compartmentalizing do-gooding (or do-besting) from the rest of their life. Unfortunately the trouble runs deeper than this, because once you buy an argument that letting yourself have this is what will be best for doing good overall, you are already seriously risking undermining the psychological benefits.
Whenever you do something for yourself, there is a voice in the back of your head asking whether you are really so morally weak that this particular thing is necessary. Even if you overcome that voice, there is a worse one that instrumentalizes the things you do for yourself. Buying ice cream? This is now your "anti-burnout ice cream". Worse, have a kid (if, as in Nielsen's example, you think this isn't part of your best set of altruistic decisions), and this is your "anti-burnout kid".
It's very hard to get around this one. Nielsen's preferred solution would clearly be for people not to buy this very demanding theory of morality at all, because he thinks it is wrong. That said, he doesn't really argue for this, and for those of us who do think the demanding ideal of morality happens to be correct, it isn't an open avenue.
The best solution, as far as I can tell, is to distance your intuitive worldview from this standard of morality as much as possible. Make it a small part of your mind, internalized largely on an academic level and perhaps taken out on rare occasions for inspiration, but insist on not viewing your day-to-day life through it. The trickiness of doing this is, I think, a real part of why some of the problem persists, and Nielsen nails this part.
(edited on 10/24/22 to replace broken link)
Throwaway account to give a vague personal anecdote. I agree this has gotten better for some, but I think it is still a problem (a) for new people, who have to work through the stages on their own, perhaps faster than people did 5 years ago; and (b) for people who are "converted" to EA but not as successful in their pursuit of impact. These people are left in a precarious psychological position.
I experienced both. I think of myself as "EA bycatch." By the time I had gone through the phases of thinking all of this through for myself, I had already sacrificed a lot of things in the name of impact that I can't get back (money, time, alternative professional opportunities, relationships, etc.). Frankly, some things got wrecked in my life that can't be put back together. Being collateral damage for the cause feels terrible, but I really do hope the work brings results and is worth it.
Somehow, that givinggladly.com link is broken for me. Here is an archived version: https://web.archive.org/web/20220412232153/http://www.givinggladly.com/2013/06/cheerfully.html