Good points! This is exactly the sort of work we do at Sentience Institute on moral circle expansion (mostly for farmed animals from 2016 to 2020, but since late 2020, most of our work has been directly on AI—and of course the intersections), and it has been my priority since 2014. Also, Peter Singer and Yip Fai Tse are working on “AI Ethics: The Case for Including Animals”; there are a number of EA Forum posts on nonhumans and the long-term future; and the harms of AI and “smart farming” for farmed animals are a common topic, such as in this recent article that I was quoted in. My sense from talking to many people in this area is that there is substantial room for more funding; we’ve gotten some generous support from EA megafunders and individuals, but we also consistently get dozens of highly qualified applicants whom we have to reject every hiring round, including people with good ideas for new projects.
Jacy
The Future Might Not Be So Great
Why I prioritize moral circle expansion over reducing extinction risk through artificial intelligence alignment
Meta AI announces Cicero: Human-Level Diplomacy play (with dialogue)
Why Animals Matter for Effective Altruism
“EA’s are, despite our commitments to ethical behaviour, perhaps no more trustworthy with power than anyone else.”
I wonder if “perhaps no more trustworthy with power than anyone else” goes a little too far. I think the EA community made mistakes that facilitated FTX misbehavior, but that is only one small group of people. Many EAs have substantial power in the world and have continued to be largely trustworthy (and thus less newsworthy!), and I think evidence like our stronger-than-average explicit commitments to use power for good and the critical reflection happening in the community right now suggests we are probably doing better than average—even though, as you rightly point out, we’re far from perfect.
I strongly agree with this. In particular, the critiques of EA in relation to these events seem much less focused on the recent fraud than EAs’ own defenses are. I think we are choosing the easiest thing to condemn and distance ourselves from, in a very concerning way. Deliberately or not, our focus on outrage against the recent fraud distracts onlookers and community members from the more serious underlying concerns, which, given their likelihood, should weigh more heavily on our behavior.
The two most pressing to me are the possibilities (i) that EAs knew about serious concerns with FTX based on major events in ~2017–2018, as recently described by Kerry Vaughan and others, as well as more recent concerns, and (ii) that EAs acted as if we had tens of billions of dollars committed to our projects even though many of us knew that money was held by FTX and FTX-affiliated entities, particularly in FTX Token (FTT), a very fragile, illiquid asset that could arguably never be sold at anywhere close to its market value, which arguably makes statements of tens of billions based on market value unjustified and misleading.
[Edit: Just to be clear, I’m not referring to leverage or fraud with point (ii); I know this is controversial! Milan now raises these same two concerns in a more amenable way here: https://forum.effectivealtruism.org/posts/WdeiPrwgqW2wHAxgT/a-personal-statement-on-ftx?commentId=3ZNGqJEpQSrDuRpSu]
Some considerations for different ways to reduce x-risk
Those considerations make sense. I don’t have much more to add for/against than what I said in the post.
On the comparison between different MCE strategies, I’m pretty uncertain which are best. The main reasons I currently favor farmed animal advocacy over your examples (global poverty, environmentalism, and companion animals) are that (1) farmed animal advocacy is far more neglected, and (2) farmed animal advocacy is far more similar to potential far future dystopias, mainly because it involves vast numbers of sentient beings who are largely ignored by most of society. I’m relatively unworried about, for example, far future dystopias where dog-and-cat-like beings (e.g. small, entertaining AIs kept around for companionship) suffer in vast numbers. And environmentalism typically advocates for non-sentient beings, which I think is quite different from MCE for sentient beings.
I think the better competitors to farmed animal advocacy are advocating broadly for antispeciesism/fundamental rights (e.g. Nonhuman Rights Project) and advocating specifically for digital sentience (e.g. a larger, more sophisticated version of People for the Ethical Treatment of Reinforcement Learners). There are good arguments against these, however, such as that it would be quite difficult for an eager EA to get much traction with a new digital sentience nonprofit. (We considered founding Sentience Institute with a focus on digital sentience. This was a big reason we didn’t.) Whereas given the current excitement in the farmed animal space (e.g. the coming release of “clean meat,” real meat grown without animal slaughter), the farmed animal space seems like a fantastic place for gaining traction.
I’m currently not very excited about “Start a petting zoo at Deepmind” (or similar direct outreach strategies) because it seems too adversarial and aggressive and would likely produce a ton of backlash. There are additional considerations for/against (e.g. I worry that it’d be difficult to push a niche demographic like AI researchers very far away from the rest of society, or at least the rest of their social circles; I also have the same traction concern I have with advocating for digital sentience), but this one just seems quite damning.
“The upshot is that, even if there are some particularly high yield interventions in animal welfare from the far future perspective, this should be fairly far removed from typical EAA activity directed towards having the greatest near-term impact on animals. If this post heralds a pivot of Sentience Institute to directions pretty orthogonal to the principal component of effective animal advocacy, this would be welcome indeed.”
I agree this is a valid argument, but given the other arguments (e.g. those above), I still think it’s usually right for EAAs to focus on farmed animal advocacy, including Sentience Institute at least for the next year or two.
(FYI for readers, Gregory and I also discussed these things before the post was published when he gave feedback on the draft. So our comments might seem a little rehearsed.)
Introducing Sentience Institute
This is great data to have! Thanks for collecting and sharing it. I think the Sioux Falls (Metaculus underestimate of the 48% ban support) and Swiss (Metaculus overestimate of the 37% ban support) factory farming ban proposals are particularly interesting opportunities to connect this survey data to policy results. I’ll share a few scattered, preliminary thoughts to spark discussion, and I hope to see more work on this topic in the future.
These 2022 results seem to be in line with the very similar surveys conducted by Rethink Priorities in 2019, which I found very useful, but I don’t know if those results have been shared publicly. Will you be sharing that data too? I know it’s been eagerly anticipated, and Sentience Institute has held off on similar work while waiting for it. I’m not sure if that 2019 data is now seen as just a pilot for this 2022 data collection?
In addition to 2017 and 2020, Sentience Institute asked these questions in the 2019 and 2021 Animals, Food, and Technology (AFT) surveys with similar results.
In 2017, we preregistered credible intervals and informally solicited estimates from many others. The data was surprisingly ban-supporting relative to priors, which may be a more important takeaway than any post-hoc explanation. I didn’t preregister any CIs for the 2019 or 2022 RP results. I think these drops in ban support are around what I’d expect, but it’s very hard to say in hindsight, especially with other variation (e.g., different outcome scales, presumably different samples).
The Sentience Institute AFT survey also has questions with pro/con information, e.g., “Some people think that we should ban all animal farming and transition to plant-based and cultured foods, to reduce harm to humans and animals. Others think that we should keep using animals for food, to provide the conventional meat consumers are used to eating. Where would you place yourself on this scale?” (wording based on the GSS). That wording seems to elicit much stronger ban support than this new wording (though take this with a large grain of salt due to other variation between the surveys), which seems to make sense as it is much more ban-supporting than the ban-opposing “it is wrong to kill animals” and “right to eat meat if they choose” wordings. Concretely, on a 1–6 support scale, we found a mean of 4.12 (95% CI: 4.04–4.21) for “ban all animal farming” with our nationally representative sample in 2021. I think it’s fair to say that’s much higher despite also having pro/con information, and I think that’s an important qualification for interpreting the 2022 RP results that readers of this post may otherwise miss.
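For concreteness, here’s a minimal sketch (hypothetical responses, not our actual analysis code) of how a mean and normal-approximation 95% CI like the 4.12 (4.04–4.21) figure can be computed from raw 1–6 responses:

```python
import math

def mean_with_ci(responses, z=1.96):
    """Mean of Likert responses with a normal-approximation 95% CI."""
    n = len(responses)
    mean = sum(responses) / n
    var = sum((x - mean) ** 2 for x in responses) / (n - 1)  # sample variance
    se = math.sqrt(var / n)  # standard error of the mean
    return mean, mean - z * se, mean + z * se

# Hypothetical responses on the 1-6 "ban all animal farming" support scale.
responses = [6, 4, 5, 3, 4, 5, 2, 6, 4, 5]
m, lo, hi = mean_with_ci(responses)
print(f"mean {m:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```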
Social scientists have long asked “Is there really any such thing as public opinion?” (Lewis 1939), and I think the majority answer has been some version of “public opinion does not exist” (e.g., Blumer 1948, Bourdieu 1972). There are many interesting wordings to consider: simple vs. complex, ban-supporting vs. ban-opposing, socially desirable and acquiescing vs. socially undesirable and anti-acquiescing, politically left- vs. right-favored, financially incentivized, politically engaged, etc. All question wordings matter, and none are objectively correct or objectively biased. I think we may disagree on this point because you say some question wordings “are biased towards answering ‘Yes’...”, though you may mean some subjective standard of bias, such as distance from likely counterfactual ballot measure results. Some wordings more naturally jibe with what people have in mind when they see survey results—I prioritize simple wordings in part for this reason—but ideally we share the exact survey wording alongside percentages or scores whenever possible to provide that clarity.
If you have time: what was the sample (MTurk, Prolific, Civis Analytics, Ipsos Omnibus, KnowledgePanel, etc.), what were the demographics, and was it weighted for representativeness? If so, how?
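To illustrate what I mean by the weighting question, here’s a minimal sketch of the simplest kind of representativeness weighting (one-variable post-stratification); all of the groups and numbers are hypothetical:

```python
from collections import Counter

# Census-style population targets for one demographic variable (hypothetical).
population_share = {"18-34": 0.30, "35-54": 0.33, "55+": 0.37}

# (age_group, supports_ban) pairs from a hypothetical, unweighted sample.
sample = [
    ("18-34", 1), ("18-34", 1), ("18-34", 0),
    ("35-54", 1), ("35-54", 0),
    ("55+", 0),
]

# Weight each respondent by population share / sample share of their group.
counts = Counter(group for group, _ in sample)
weights = {g: population_share[g] / (counts[g] / len(sample)) for g in counts}

support = sum(weights[g] * y for g, y in sample) / sum(weights[g] for g, _ in sample)
print(f"weighted ban support: {support:.1%}")  # vs. 3/6 = 50% unweighted
```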
What exactly do you mean by “strong” in “strong basis for more radical action”? One operationalization I like is: All things considered, I think these survey and ballot results should update the marginal farmed animal advocate towards more radical approaches relative to their prior. I’d love to know if you agree.
Well done again on this very interesting work! [minor edits made to this comment for clarity and fixing typos]
Why EA events should be (at least) vegetarian
[Disclaimer: Rob, 80k’s Director of Research, and I briefly chatted about this on Facebook, but I want to comment here because that post is gone and more people will see it here. Also, as a potential conflict of interest, I took the survey and work at an organization that sits between the animal and far future cause areas.]
This is overall really interesting, and I’m glad the survey was done. But I’m not sure how representative of EA community leaders it really is. I’d take the cause selection section in particular with a big grain of salt, and I wish it were more heavily qualified and discussed in different language. Counting the organizations surveyed and the number of respondents per organization, my personal tally is that 14 were meta, 12.5 were far future, 3 were poverty, and 1.5 were animal. My guess is that a similar distribution holds for the 5 unaffiliated respondents. So it should be no surprise to readers that meta and far future work were most prioritized.* **
I think we shouldn’t call this a general survey of EA leadership (e.g. the title of the post) when it’s so disproportionate. I think the inclusion of more meta organizations makes sense, but there are poverty groups like the Against Malaria Foundation and Schistosomiasis Control Initiative, as well as animal groups like The Good Food Institute and The Humane League, that seem to meet the same bar for EA-ness as the far future groups included like CSER and MIRI.
Focusing heavily on far future organizations might be partly due to selecting only organizations founded after the EA community coalesced, and while that seems like a reasonable metric (among several possibilities), it also seems biased towards far future work because that’s a newer field, and it happens to be the metric that conveniently syncs up with 80k’s cause prioritization views. Also, the ACE-recommended charity GFI was founded explicitly on the principle of effective altruism after EA coalesced. Their team says that quite frequently, and as far as I know, the leadership all identify as EA. Perhaps you’re using a metric more like social ties to other EA leaders, but that’s exactly the sort of bias I’m worried about here.
Also, the EA community as a whole doesn’t seem to hold this cause prioritization view (http://effective-altruism.com/ea/1e5/ea_survey_2017_series_cause_area_preferences/). Leadership can of course deviate from the broad community, but this is just another reason to be cautious in weighing these results.
I think your note about this selection is fair
“the group surveyed included many of the most clever, informed and long-involved people in the movement,”
and I appreciate that you looked a little at cause prioritization for relatively unbiased subsets
“Views were similar among people whose main research work is to prioritise different causes – none of whom rated Global Development as the most effective,”
“on the other hand, many people not working in long-term focussed organisations nonetheless rated it as most effective”
but it’s still important to note that you (Rob and 80k) personally favor these two areas strongly, which seems to create a big potential bias, and that we should be very cautious of groupthink in our community, where updating based on the views of EA leaders is highly prized and recommended. I know the latter is a harder concern to get around with a survey, but I think it should have been noted in the report, ideally in the Key Figures section. And as I mentioned at the beginning, I don’t think this should be discussed as a general survey of EA leaders, at least not when it comes to cause prioritization.
This post certainly made me more worried personally that my prioritization of the far future could be more due to groupthink than I previously thought.
Here’s the categorization I’m using for organizations. It might be off, but it’s at least pretty close. ff = far future
80,000 Hours (3): meta
AI Impacts (1): ff
Animal Charity Evaluators (1): animal
Center for Applied Rationality (2): ff
Centre for Effective Altruism (3): meta
Centre for the Study of Existential Risk (1): ff
Charity Science: Health (1): poverty
DeepMind (1): ff
Foundational Research Institute (2): ff
Future of Humanity Institute (3): ff
GiveWell (2): poverty
Global Priorities Institute (1): meta
Leverage Research (1): meta
Machine Intelligence Research Institute (2): ff
Open Philanthropy Project (5): meta
Rethink Charity (1): meta
Sentience Institute (1): animal/ff
Unaffiliated (5)
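And here’s a minimal sketch of the tally behind my counts above (splitting Sentience Institute’s “animal/ff” evenly between the two categories, and leaving out the 5 unaffiliated respondents):

```python
from collections import defaultdict

# (respondents, category) per the list above; "animal/ff" splits 0.5/0.5.
orgs = {
    "80,000 Hours": (3, "meta"),
    "AI Impacts": (1, "ff"),
    "Animal Charity Evaluators": (1, "animal"),
    "Center for Applied Rationality": (2, "ff"),
    "Centre for Effective Altruism": (3, "meta"),
    "Centre for the Study of Existential Risk": (1, "ff"),
    "Charity Science: Health": (1, "poverty"),
    "DeepMind": (1, "ff"),
    "Foundational Research Institute": (2, "ff"),
    "Future of Humanity Institute": (3, "ff"),
    "GiveWell": (2, "poverty"),
    "Global Priorities Institute": (1, "meta"),
    "Leverage Research": (1, "meta"),
    "Machine Intelligence Research Institute": (2, "ff"),
    "Open Philanthropy Project": (5, "meta"),
    "Rethink Charity": (1, "meta"),
    "Sentience Institute": (1, "animal/ff"),
}

totals = defaultdict(float)
for n, category in orgs.values():
    parts = category.split("/")
    for part in parts:
        totals[part] += n / len(parts)  # split mixed categories evenly

print(dict(totals))  # {'meta': 14.0, 'ff': 12.5, 'animal': 1.5, 'poverty': 3.0}
```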
*The 80k post notes that not everyone filled out all the survey answers, e.g. GiveWell only had one person fill out the cause selection section.
**Assuming the reader has already seen other evidence, e.g. that CFAR only recently adopted a far future mission, or that people like Rob went from other cause areas towards a focus on the far future.
EA Interview Series: Michelle Hutchinson, December 2015
Thanks for posting on this important topic. You might be interested in this EA Forum post where I outlined many arguments against your conclusion that the expected value of extinction risk reduction is (highly) positive.
I do think your “very unlikely that [human descendants] would see value exactly where we see disvalue” argument is a viable one, but I think it’s just one of many considerations, and my current impression of the evidence is that it’s outweighed.
Also FYI the link in your article to “moral circle expansion” is dead. We work on that approach at Sentience Institute if you’re interested.
Just to add a bit of info: I helped with THINK when I was a college student. It wasn’t the most effective strategy (largely, it was founded before we knew people would coalesce so strongly into the EA identity, and we didn’t predict that), but Leverage’s involvement with it was professional and thoughtful. I didn’t get any vibes of cultishness from my time with THINK, though I did find Connection Theory a bit weird and not very useful when I learned about it.
2018 list of half-baked volunteer research ideas
I personally don’t think wild animal suffering (WAS) is as similar to the most plausible far future dystopias as farmed animal suffering is, so I’ve been prioritizing it less even over just the past couple of years. I don’t expect far future dystopias to involve as much naturogenic (nature-caused) suffering, though of course it’s possible (e.g. if humans create large numbers of sentient beings in a simulation but then let the simulation run on its own for a while, the simulation could come to be viewed as naturogenic-ish, and those attitudes could become more relevant).
I think if one wants something very neglected, digital sentience advocacy is basically across-the-board better than WAS advocacy.
That being said, I’m highly uncertain here and these reasons aren’t overwhelming (e.g. WAS advocacy pushes on more than just the “care about naturogenic suffering” lever), so I think WAS advocacy is still, in Gregory’s words, an important part of the ‘far future portfolio.’ And often one can work on it while working on other things, e.g. I think Animal Charity Evaluators’ WAS content (e.g. [guest blog post by Oscar Horta](https://animalcharityevaluators.org/blog/why-the-situation-of-animals-in-the-wild-should-concern-us/)) has helped them be more well-rounded as an organization, and didn’t directly trade off with their farmed animal content.
Rather than further praising or critiquing the FTX/Alameda team, I want to flag my concern that the broader community, including myself, made a big mistake in the “too much money” discourse and the subsequent push away from earning to give (ETG) and fundraising. People have discussed Open Philanthropy and FTX funding in a way that gives the impression that tens of billions are locked in for effective altruism, despite many EA nonprofits still insisting that they have significant room for more funding. (There has been some pushback, and my impression that the “too much money” discourse has been the more prevalent one may not be representative.)
I’ve often heard the marginal ETG amount, the $X per year at which a typical EA employee should be indifferent between direct EA employment and donating, placed well above $1,000,000, and I see many people working on megaproject ideas designed to absorb as much funding as possible. I think many would say that these choices make sense in a community with >$30 billion in funding but not in one with <$5 billion, just as ballparks to put numbers on things. I think many of us are in fortunate positions to pivot quickly and safely, but for many, especially those from underprivileged backgrounds, this collapse in funding would be completely disenchanting. For some, it already has been. I hope we’ll be more cautious, skeptical, and humble in the future.
[Edit 2022-11-10: This comment started with “I’m grateful for and impressed by all the FTX/Alameda team has done, and”, which I intended as an extension of compassion in a tense situation and an acknowledgment that the people at FTX and Alameda have done great things for the less fortunate (e.g., their grants to date, choosing to earn to give in the first place), regardless of the current situation and any possible fraud or other serious misbehavior. I still think this is important, true, and often neglected in crisis, but it distracts from the point of this comment, so I’ve cut it from the top and noted that here. Everyone involved and affected has my deepest sympathy.]