It's very easy for any of us to call "EA" as we see it and naturally make claims about the preferences of the community. But this would be very clearly circular. I'd be tempted to defer to the EA Survey. AI was the top cause for only 16% of respondents in the EA Survey. Even among those employed full-time in a non-profit (maybe a proxy for full-time EAs), it was the top priority for 11.26%, compared to 44.22% for poverty and 6.46% for animal welfare.
As noted in the fb discussion, it seems unlikely full-time non-profit employment is a good proxy for "full-time EAs" (i.e. those working full time at an EA organisation; E2Gers would be one of a few groups who should also be considered "full-time EAs" in the broader sense of the term).
For this group, one could stipulate that every group which posts updates to the EA newsletter is an EA group (I looked at the last half-dozen or so newsletters, so any group which didn't have an update is excluded, but this is likely minor). Toting up a headcount of staff (I didn't correct for FTE, and excluded advisors/founders/volunteers/freelancers/interns; all of these decisions could be challenged) and recording the prevailing focus of each org gives something like this:
80,000 Hours (7 people) - Far future
ACE (17 people) - Animals
CEA (15 people) - Far future
CSER (11 people) - Far future
CFI (10 people) - Far future (I only included their researchers)
FHI (17 people) - Far future
FRI (5 people) - Far future
GiveWell (20 people) - Global poverty
Open Phil (21 people) - Far future (mostly)
SI (3 people) - Animals
CFAR (11 people) - Far future
Rethink Charity (11 people) - Global poverty
WASR (3 people) - Animals
REG (4 people) - Far future [Edited after Jonas Vollmer kindly corrected me]
FLI (6 people) - Far future
MIRI (17 people) - Far future
TYLCS (11 people) - Global poverty
Totting this up, I get roughly two thirds of people working at orgs which focus on the far future (66%), 22% on global poverty, and 12% on animals. Although it is hard to work out what proportion of the far future work is AI-focused, I'm pretty sure it is the majority, so 45% AI wouldn't be wildly off-kilter if we thought the EA handbook should represent the balance of "full-time" attention.
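For transparency, those percentages are just headcount-weighted shares of the list above; a minimal sketch of the arithmetic (in Python, using the same headcounts and my own coarse cause labels) looks like this:

```python
# A minimal sketch of the tally above: the same headcounts, with my coarse cause labels.
headcounts = {
    "80,000 Hours": (7, "far future"),
    "ACE": (17, "animals"),
    "CEA": (15, "far future"),
    "CSER": (11, "far future"),
    "CFI": (10, "far future"),
    "FHI": (17, "far future"),
    "FRI": (5, "far future"),
    "GiveWell": (20, "global poverty"),
    "Open Phil": (21, "far future"),
    "SI": (3, "animals"),
    "CFAR": (11, "far future"),
    "Rethink Charity": (11, "global poverty"),
    "WASR": (3, "animals"),
    "REG": (4, "far future"),
    "FLI": (6, "far future"),
    "MIRI": (17, "far future"),
    "TYLCS": (11, "global poverty"),
}

# Sum headcounts by cause area.
totals = {}
for count, cause in headcounts.values():
    totals[cause] = totals.get(cause, 0) + count

grand_total = sum(totals.values())  # 189 people across the listed orgs
for cause, count in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{cause}: {count} people ({100 * count / grand_total:.0f}%)")
# far future: 124 people (66%)
# global poverty: 42 people (22%)
# animals: 23 people (12%)
```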
I doubt this should be the relevant metric for how to divvy up space in the EA handbook. It also seems unclear how considerations of representation should play into selecting content, or, if they should, which community is the key one to proportionately represent.
Yet I think I'd be surprised if it wasn't the case that, among those working "in" EA, the majority work on the far future and a plurality work on AI. This also agrees with my impression that those most involved in the EA community skew strongly towards the far future cause area in general and AI in particular. I think they do so, bluntly, because these people have better access to the balance of reason, which in fact favours these being the most important things to work on.
"full-time EAs" (i.e. those working full time at an EA organisation; E2Gers would be one of a few groups who should also be considered "full-time EAs" in the broader sense of the term).
I think this methodology is pretty suspicious. There are more ways to be a full-time EA (FTEA) than working at an EA org, or even E2Ging. Suppose someone spends their time working on, say, poverty out of a desire to do the most good, and thus works at a development NGO or for a government. Neither development NGOs nor governments will count as an "EA org" on your definition because they won't be posting updates to the EA newsletter. Why would they? The EA community has very little comparative advantage in solving poverty, so what would be the point in, say, Oxfam or DFID sending update reports to the EA newsletter? It would frankly be bizarre for a government department to update the EA community. We might say "ah, but people who work on poverty aren't really EAs", but that would just beg the question.
I think your list undercounts the number of animal-focused EAs. For example, it excludes Sentience Politics, which provided updates through the EA newsletter in September 2016, January 2017, and July 2017. It also excludes the Good Food Institute, an organization which describes itself as "founded to apply the principles of effective altruism (EA) to change our food system." While GFI does not provide updates through the EA newsletter, its job openings are mentioned in the December 2017, January 2018, and March 2018 newsletters. Additionally, it excludes organizations like the Humane League, which, while not explicitly EA, has been described as having a "largely utilitarian worldview." Though the Humane League does not provide updates through the EA newsletter, its job openings are mentioned in the April 2017, February 2018, and March 2018 newsletters.
Perhaps the argument for excluding GFI and the Humane League (while including direct work organizations in the long term future space) is that relatively few people in direct work animal organizations identify as EAs (while most people in direct work long term future organizations identify as EA). If this is the reason, I think it'd be good for someone to provide evidence for it. Also, if the idea behind this method of counting is to look at the revealed preference of EAs, then I think people earning to give have to be included, especially since earning to give appears to be more useful for farm animal welfare than for long term future causes.
(Most of the above also applies to global health organizations.)
I picked the "updates" purely in the interests of time (easier to skim), because they give some sense of which orgs are considered "EA orgs" rather than "orgs doing EA work" (a distinction which I accept is imprecise: would a GW top charity "count"?), and because I (forlornly) hoped that pointing to a method, however brief, would forestall suspicion of cherry-picking.
I meant the quick-and-dirty data gathering to be more an indicative sample than a census. I'd therefore expect a significant margin of error (but not one so significant as to change the bottom line). Other relevant candidate groups are also left out: BERI, Charity Science, Founder's Pledge, ?ALLFED. I'd expect there are more.
I think that while this headcount is not a good metric for how to allocate space in the EA handbook, it is quite a valuable overview in itself!
Just as a caveat, the numbers should not be directly compared to the numbers from the EA Survey, as the latter also included cause prioritization, rationality, meta, politics and more.
(Using such categories, some organizations would end up classified in different boxes.)