Nice! I like these kinds of synthesis posts, especially when they try to be comprehensive. One could also add:
EA as a “gap-filling gel” within the context of existing society and its altruistic tendencies. (I think I heard this general idea, though not the name, in MacAskill’s EAG London closing remarks, but the video isn’t up yet, so I’m not sure and don’t want to put words in his mouth.) The idea is that there’s already lots of work in:
- Making people healthier
- Reducing poverty
- Animal welfare
- National/international security and diplomacy (incl. nukes, bioweapons)
And if none of these existed, “doing the most good in the world” would be an even more massive undertaking than it already seems; e.g., we’d likely have to “start” by inventing the field of medicine from scratch.

But a large amount of altruistic effort does exist; it’s just not optimally directed when viewed globally, because it’s mostly shaped by people who think only about their local region of it. Consequently, altruism as a whole has several blind spots:
- Making people healthier and/or reducing poverty in the developing world through certain interventions (e.g. bednets, direct cash transfers) that turn out to work really well
- Animal welfare for factory-farmed and/or wild animals
- Global security from technologies whose long-term risks are neglected (e.g. AI)
And the role of EA is to fill those gaps within the altruistic portfolio.
As an antithesis to that mode of thinking, we could also view:
EA as a foundational rethinking of our altruistic priorities, to the extent we view those priorities as misdirected. Examples:

- Some interventions that were proposed with altruistic goals in mind turn out to be useless or even net-negative under scrutiny (e.g. Scared Straight)
- Many broad trends that seem “obviously good,” such as economic growth or technological progress, look neutral, uncertain, or even net-negative in light of certain longtermist thinking