Relevant to this, in the following article MacAskill gives this account of what EA is:
What Is Effective Altruism?
As defined by the leaders of the movement, effective altruism is the use of evidence and reason to work out how to benefit others as much as possible, and taking action on that basis. So defined, effective altruism is a project rather than a set of normative commitments. It is both a research project—to figure out how to do the most good—and a practical project, of implementing the best guesses we have about how to do the most good. There are some defining characteristics of the effective altruist research project. The project is:
Maximizing. The point of the project is to try to do as much good as possible.
Science-aligned. The best means to figuring out how to do the most good is the scientific method, broadly construed to include reliance on both empirical observation and careful, rigorous argument or theoretical models.
Tentatively welfarist. As a tentative hypothesis or a first approximation, goodness is about improving the welfare of individuals.
Impartial. Everyone’s welfare is to count equally.
Also, you’ve accidentally posted the same thing three times, if you hadn’t noticed already.
I like “science-aligned” better than “secular”, since the former implies the latter as well as a bunch of other important concepts.
Also, it’s worth noting that “everyone’s welfare is to count equally” in Will’s account is approximately equivalent to “effective altruism values all people equally” in Ozymandias’ account, but neither of them implies the following paraphrase: “from the effective altruism perspective, saving the life of a baby in Africa is exactly as good as saving the life of a baby in America, which is exactly as good as saving the life of Ozy’s baby specifically.” I understand the intention of that phrase, but actually I’d save whichever baby would grow up to have the best life. Is there any better concrete description of what impartiality actually implies?
“Whichever will have the best life” seems very compatible with “welfare.” I agree there are a lot of considerations that aren’t obviously indicated by any of these definitions, though, even if they’re compatible.
Yes, I’d be excited to always include something about epistemics, such as scientific mindset. One can then argue about evidence instead of whether something qualifies as secular, which seems only relevant insofar as it is a weak predictor of well-evidenced-ness. In particular, while I don’t assign it high credence, I would not be hasty to rule out maximising ‘enlightenment’ as one of my end goals. Terminal goals are weird.
Notably, without an epistemic/scientific part to the definition, it is unclear how to distinguish many current EA approaches from e.g. Playpumps, a secular project that was hyped for its extraordinary outcomes in helping people far away. Looking forward, I also think that strong epistemics are how long-term-focused EA efforts can continue to be more useful than regular futurism.