The point of EA is to work out how to do the most good, then do it. There are three target groups one might try to benefit: (1) (far) future lives, (2) near-term humans, (3) (near-term) animals. Given this, one cannot, in good faith, call something an ‘introduction’ when it focuses almost exclusively on object-level attempts to benefit just one group.
This is a specific way of framing EA, and one that I think feels natural in part for ‘sociology and history of EA’ reasons: individual EAs often self-identify as either interested in existential risk, interested in animal welfare, or interested in third-world development, in large part due to the early influence of Peter Singer, GiveWell, LessWrong, and the Oxford longtermists, who broke in different directions on these questions. The EA Funds use a division like this, and early writing about EA liked to emphasize this division.
But I don’t agree that this is the most natural (much less the only reasonable) way of dividing up the space of high-impact altruistic goals or projects, so I don’t think all intro resources should emphasize this framing.
If you’d framed EA as being about ‘(1) causing positive experiences and (2) preventing negative ones’, you could have argued that EA is about the choice between negative-leaning and positive-leaning utilitarianism, and that all intro resources must put similar emphasis on those two perspectives (regardless of the merits of the perspectives in the eyes of the intro-resource-maker).
If you’d framed EA as being about ‘direct aid, institution reform, cause prioritization, and improving EAs’ effectiveness’, you could argue that any intro resource is obviously bad if it neglects any one of those categories, even if the resource only neglects a category because it carves up the space differently.
If you’d framed EA as being about ‘helping people in the developed world, helping people in the developing world, helping animals, or helping far-future lives’, then we’d have needed to give equal prominence to more nationalist and regionalist perspectives on altruism as well.
My main objection is to the structure of this argument. There are worlds where EA initially considered it an open question whether nationalism is a reasonable perspective to bring to cause prioritization; and worlds where lots of EAs later realized they were wrong and nationalism isn’t a good perspective. In those worlds, it’s important that we not be so wedded to early framings of ‘the key disagreements in the movement’ that no one can ever move on from treating nationalist-EA as a contender.
(This isn’t intended as an argument for ‘our situation is analogous to the nationalism one’; it’s intended as a structural objection to arguments that take for granted a certain framing of EA, require all intro sources to fit that frame, and make it hard to update away from that frame in worlds where some of the options do turn out to be bad.)
Hi Rob, I agree with your and Ryan’s point that the poverty/animals/future split is something that evolved because of EA’s history, and I can imagine a world with different categories of cause areas.
But something that I keep seeing missed is this:
“A good introduction to EA would, at the very least, include a wide range of steel-manned positions about how to do the most good that are held by sincere, thoughtful, individuals aspiring to do the most good.”
I’m really troubled by any “Introduction to EA” that suggests EA is about long-termism. A brief intro saying “by the way, some people have different views to the following 20 hours of content!” is not sufficient. This should be relabelled as an intro to EA long-termism if it remains in its current form.
Although I understand the nationalism example isn’t meant to be analogous, my impression is that this structural objection only really applies when our situation is analogous.
If historically EA paid a lot of attention to nationalism (or trans-humanism, the scepticism community, or whatever else) but had by-and-large collectively ‘moved on’ from these, contemporary introductions to the field shouldn’t feel obliged to cover them extensively, nor treat the relative merits of what they focus on now versus then as an open question.
Yet, however you slice it, EA as it stands now hasn’t by-and-large ‘moved on’ to be ‘basically longtermism’, where its interest in (e.g.) global health is clearly atavistic. I’d be willing to go to bat for substantial slants to longtermism, as (I aver) its over-representation amongst the more highly engaged and the disproportionate migration of folks to longtermism from other areas warrant claims that epistocratic weighting of consensus would favour longtermism over anything else. But even this has limits which ‘greatly favouring longtermism over everything else’ exceeds.
How you choose to frame an introduction is up for grabs, and I don’t think ‘the big three’ is the only appropriate game in town. Yet if your alternative way of framing an introduction to X ends up strongly favouring one aspect (further, the one you are sympathetic to) disproportionate to any reasonable account of its prominence within X, something has gone wrong.
This isn’t the main problem I had in mind, but it’s worth noting that EA animal advocacy is also aimed at improving welfare and/or preventing suffering in future minds, even when it’s not aimed at far-future animals. The goal of factory farm reform for chickens is to affect (or prevent) future chickens, not chickens that are alive at the time people develop or push for the reform.