I’ve been working a few hours per week at the Effective Altruism Infrastructure Fund as a Fund Manager since this summer.
EA’s reputation is at a bit of a low point. I’ve even heard EA described as the ‘boogeyman’ in certain well-meaning circles. So why do I feel inclined to double down on effective altruism rather than move on to other endeavours? Some shower thoughts:
I generally endorse aiming directly for the thing you actually care about. It seems higher integrity, and usually more efficient. I want to do the most good possible, and this goal already has a name and community attached to it: EA.
I find the core, underlying principles very compelling. The Centre for Effective Altruism highlights scope sensitivity, impartiality, recognition of tradeoffs, and the Scout Mindset. I endorse all of these!
Seems to me that EA has a good track record of important insights on otherwise neglected topics. Existential risk, risks of astronomical suffering, AI safety, wild animal suffering: I attribute a lot of success in these nascent fields to the insights of people with a shared commitment to EA principles and goals.
Of course, there’s been a lot of progress on slightly less neglected cause areas too. The mind boggles at the sheer number of human lives saved and the vast amount of animal suffering reduced by organisations funded by Open Philanthropy, for example.
I have personally benefited massively in achieving my own goals. Beyond some of the above insights, I attribute many improvements in my productivity and epistemics to discussions and recommendations that arose out of the pursuit of EA.
In other roles or projects I’m considering, when I think of questions like “who will realistically consider acting on this idea I think is great, and give up their time or money to make it happen?”, the most obvious and easiest answer often looks like some subset of the EA community. Obviously there are some echo chamber-y and bias-related reasons that might feed into this, but I think there are some real and powerful ones too.
Written quickly (15-20 mins) and not especially neatly (it was originally meant for LinkedIn rather than here). There are better takes on this topic (e.g.).
There are some pragmatic, career-focused reasons too of course. I’m better networked inside EA than outside of it. I have long thought grantmaking is a career direction I’d like to try my hand at, and this seemed like a good specific opportunity for me.
Further caveats I didn’t have space to make on LinkedIn: I wrote this quick take as an individual, not for EAIF or my other projects, etc.; I haven’t checked this with colleagues. There are also identity-related and bias reasons that draw me to stay involved with EA. Seems clear that EA has had a lot of negative impact too. And of course we have deep empirical and moral uncertainty about what’s actually good and useful in the long run after accounting for indirect effects. I haven’t attempted any sort of quantitative analysis of the overall effects.
But in any case, I still expect that overall EA has been and will be a positive force for good. And I’m excited to be contributing to EAIF’s mission. I just wrote a post about Ideas EAIF is excited to receive applications for; please consider checking that out if any of this resonates and/or you have ideas about how to improve EA and the impact of projects making use of EA principles!
I agree with so much here.
Here are my own responses to the question you raised: “So why do I feel inclined to double down on effective altruism rather than move onto other endeavours?”
I have doubled down a lot over the last ~1.5 years. I am not at all shy about being an EA; it is even on my LinkedIn!
This is partly for integrity and honesty reasons. Yes, I care about animals and AI and like math and rationality and whatnot. All this is a part of who I am.
Funnily enough, a non-negligible reason why I have doubled down (and am more pro-EA than before) is the sheer quantity of not-so-good critiques. And people keep publishing them.
Another reason is that there are bizarre caricatures of EAs out there. No, we are not robotic utility maximizers. In my personal interactions, I hope people come away realizing, “okay, this is just another feel-y human with a bunch of interests who happens to be vegan and feels strongly about donations.”
“I have personally benefited massively in achieving my own goals.” — I hope this experience is more common!
I feel the epistemics of the EA and EA-adjacent community have enormously improved my mental health and decision-making; being in the larger EA-sphere has improved my view of life; I have more agency; I am much more open to new ideas, even those I vehemently disagree with; and I am much more sympathetic to value and normative pluralism than before!
I wish more everyday EAs were louder about their EA-ness.