I think we should move away from messaging like “Action X only saves 100 lives. Spending money on malaria nets instead would save 10,000 lives. Therefore action X sucks.” Not everyone trusts the GiveWell numbers, and saving 100 lives really is valuable by any absolute measure.
I understand why doctors might come to EA with a bad first impression, given the anti-doctor sentiment. But we need doctors! We need doctors to help develop high-impact medical interventions, design new vaccines, work on anti-pandemic plans, and so much more. We should have an answer for doctors asking “what is the most good I can do with my work?” that goes beyond merely asking them to donate money.
I absolutely think we should stick to that messaging. Trying to do the most good, rather than just some good, is the core of our movement. I would point out that there are also many doctors who were not discouraged and chose to change their career entirely as a result of EA. I personally know a few who ended up working on the very things you encourage!
That said, we should of course be careful when discouraging interventions whose cost-effectiveness analyses we haven’t looked into in detail, as it’s easy to arrive at a lower-looking impact simply because of methodological differences between GiveWell’s cost-effectiveness analysis and your own.
Let’s separate this out:
1. There are some medics who completely buy into EA and have changed their entire careers directly in line with EA philosophy.
2. There are some medics who are looking to increase and maximise the impact of their careers, but who aren’t sold on all (or some aspects) of EA. They may also have a particular cause area preference, e.g. global medical education, that isn’t thought of as a high-impact cause area by EAs.
I think our philosophy is to work with both of these groups, rather than just (1).[1] I think the way we do that is by acknowledging that EA is fundamentally a question; we talk through EA ideology and frameworks without being prescriptive about the ‘answers’ and conclusions of what people should work on.
I think this recent summary from a post on the forum is quite helpful here:
I think the “bait and switch” of EA (selling “EA is a question” but seeming to deliver “EA is these specific conclusions”) is self-limiting for our total impact, because:
It limits the size of our community (it puts off people who see it as a bait and switch)
It limits the quality of the community (groupthink, echo chambers, overfishing small ponds, etc.)
We lose allies
We create enemies
Impact is a product: size (community + allies) × quality (community + allies), minus the actions of enemies actively working against us.
If we decrease the size and quality of our community and allies while increasing the size and ferocity of the group working against us, then we limit our impact.
We do fundamentally serve (1) as well, and think this is a great group of people we shouldn’t miss either!
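If it helps to make the summary’s informal formula concrete, here is a minimal numeric sketch. All numbers are invented purely for illustration; they are not estimates of anything.

```python
def impact(size: float, quality: float, enemy_actions: float) -> float:
    """Impact as a product of size and quality, minus active opposition,
    per the informal formula in the summary above."""
    return size * quality - enemy_actions

# Hypothetical welcoming, non-prescriptive movement:
# larger and higher-quality community, fewer people working against it.
welcoming = impact(size=1000, quality=0.8, enemy_actions=50)   # 750.0

# Hypothetical "bait and switch" movement:
# smaller, lower-quality community, more active opposition.
bait_and_switch = impact(size=600, quality=0.6, enemy_actions=200)  # 160.0

assert welcoming > bait_and_switch
```

The point of the sketch is only directional: shrinking size and quality while growing opposition cuts total impact on every term at once.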
Thanks for your comment, and I completely agree with you! I think the framing of “what is the most good I can do with my work?” is a great one that is underappreciated.
I really like framings which acknowledge how hard (emotionally) it can be to choose malaria nets.

I’d like to push back a bit on that—it’s so common in the EA world to say, if you don’t believe in malaria nets, you must have an emotional problem. But there are many rational critiques of malaria nets. Malaria nets should not be this symbol where believing in them is a core part of the EA faith.
it’s so common in the EA world to say, if you don’t believe in malaria nets, you must have an emotional problem.
I’m not saying that.
The point I was trying to make was actually the opposite—that even for the “cold and calculating” EAs it can be emotionally difficult to choose the intervention (in this case malaria nets) which doesn’t give you the “fuzzies” or feeling of doing good that something else might.
I was trying to say that it’s normal to find some decisions emotionally harder than others, and that framings which focus on this can come across as dismissive of other people’s actions. (Of course, I didn’t elaborate on this in the original comment.)
Malaria nets should not be this symbol where believing in them is a core part of the EA faith.
I don’t make this claim in my comment—I am just using malaria nets as an example since you used it earlier, and it’s an accepted shorthand for “commonly recommended effective intervention” (but maybe we should just say that—maybe we shouldn’t use the shorthand).
I think I sit somewhere between you both: broadly, we think that there shouldn’t be “one” road to impact, whether that be bed nets or something else.
Our explicit purpose is to use EA frameworks and thinking to help people reach their own conclusions. We think that common EA causes are very promising and very likely to be highly impactful, but we err on the side of caution and avoid being overly prescriptive.