Just to add a bit of info: I helped with THINK when I was a college student. It wasn’t the most effective strategy (largely because it was founded before we knew people would coalesce so strongly around the EA identity, which we didn’t predict), but Leverage’s involvement with it was professional and thoughtful. I didn’t get any vibes of cultishness from my time with THINK, though I did find Connection Theory a bit weird and not very useful when I learned about it.
I get it pretty frequently from newcomers (maybe in the top 20 questions for animal-focused EA?), but everyone seems convinced by a brief explanation of how there’s still a small chance of a big purchasing change, even though most individual consumption changes don’t lead to any purchasing change at all.
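For newcomers, here’s a minimal worked version of that explanation, using purely illustrative numbers of my own rather than anything from real supply chains: suppose a store restocks in batches of $k$ units, so any single forgone purchase only changes the store’s order with probability of roughly $1/k$, but when it does, the order shrinks by a whole batch of $k$ units. Then, in expectation,

$$
\mathbb{E}[\Delta \text{ units produced}] \approx \frac{1}{k} \times k = 1 \text{ unit per purchase forgone,}
$$

so the rare, big purchasing changes roughly make up for all the individual purchases that change nothing.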
Exactly. Let me know if this doesn’t resolve things, zdgroff.
Yes, terraforming is a big way in which close-to-WAS scenarios could arise. I do think it’s smaller in expectation than digital environments that develop on their own and thus are close-to-WAS.
I don’t think terraforming would be done very differently from how wildlife exists today, e.g. I doubt it would be done without predation and disease.
Ultimately I still think the digital, not-close-to-WAS scenarios seem much larger in expectation.
I’d qualify this by adding that the philosophical-type reflection seems to lead in expectation to more moral value (positive or negative, e.g. hedonium or dolorium) than other forces, despite overall having less influence than those other forces.
Thanks for commenting, Lukas. I think Lukas, Brian Tomasik, and others affiliated with FRI have thought more about this, and I basically defer to their views here, especially because I haven’t heard any reasonable people disagree with this particular point. Namely, I agree with Lukas that there seems to be an inevitable tradeoff here.
I just took it as an assumption in this post that we’re focusing on the far future, since I think basically all the theoretical arguments for/against that have been made elsewhere. Here’s a good article on it. I personally mostly focus on the far future, though not overwhelmingly so. I’m at something like 80% far future, 20% near-term considerations for my cause prioritization decisions.
This may take a few decades, but social change might take even longer.
To clarify, the post isn’t talking about ending factory farming. And I don’t think anyone in the EA community thinks we should try to end factory farming without technology as an important component. Though I think there are good reasons for EAs to focus on the social change component, e.g. there is less for-profit interest in that component (most of the tech money is from for-profit companies, so it’s less neglected in this sense).
Hm, yeah, I don’t think I fully understand you here either, and this seems somewhat different than what we discussed via email.
My concern is with (2) in your list. “[T]hey do not wish to be convinced to expand their moral circle” is extremely ambiguous to me. Presumably you mean that, without MCE advocacy being done, they wouldn’t put wide-MC* values (or values that lead to a wide MC) into an aligned AI. But I think that’s being conflated with “they actively oppose it” or “they would answer ‘no’ if asked, ‘Do you think your values are wrong when it comes to which moral beings deserve moral consideration?’”
I think they don’t actively oppose it, that they would mostly answer “no” to that question, and that it’s very uncertain whether they will put wide-MC-leading values into an aligned AI. I don’t think CEV or similar reflection processes reliably lead to wide moral circles. I think they can still be heavily influenced by their initial set-up (e.g. what the values of humanity are when reflection begins).
This leads me to think that you only need (2) to be true in a very weak sense for MCE to matter. I think it’s quite plausible that this is the case.
*Wide-MC meaning an extremely wide moral circle, e.g. one that includes insects and small/weird digital minds.
I personally don’t think WAS is as similar to the most plausible far future dystopias as factory farming is, so I’ve been prioritizing it less even over just the past couple of years. I don’t expect far future dystopias to involve as much naturogenic (nature-caused) suffering, though of course it’s possible (e.g. if humans create large numbers of sentient beings in a simulation, but then let the simulation run on its own for a while, the simulation could come to be viewed as naturogenic-ish and those attitudes could become more relevant).
I think if one wants something very neglected, digital sentience advocacy is basically across-the-board better than WAS advocacy.
That being said, I’m highly uncertain here and these reasons aren’t overwhelming (e.g. WAS advocacy pushes on more than just the “care about naturogenic suffering” lever), so I think WAS advocacy is still, in Gregory’s words, an important part of the ‘far future portfolio.’ And often one can work on it while working on other things, e.g. I think Animal Charity Evaluators’ WAS content (e.g. [guest blog post by Oscar Horta](https://animalcharityevaluators.org/blog/why-the-situation-of-animals-in-the-wild-should-concern-us/)) has helped them be more well-rounded as an organization, and didn’t directly trade off with their farmed animal content.
Those considerations make sense. I don’t have much more to add for/against than what I said in the post.
On the comparison between different MCE strategies, I’m pretty uncertain which are best. The main reasons I currently favor farmed animal advocacy over your examples (global poverty, environmentalism, and companion animals) are that (1) farmed animal advocacy is far more neglected, and (2) farmed animal advocacy is far more similar to potential far future dystopias, mainly just because it involves vast numbers of sentient beings who are largely ignored by most of society. I’m relatively unworried about, for example, far future dystopias where dog-and-cat-like beings (e.g. small, entertaining AIs kept around for companionship) suffer in vast numbers. And environmentalism typically advocates for non-sentient beings, which I think is quite different from MCE for sentient beings.
I think the better competitors to farmed animal advocacy are advocating broadly for antispeciesism/fundamental rights (e.g. Nonhuman Rights Project) and advocating specifically for digital sentience (e.g. a larger, more sophisticated version of People for the Ethical Treatment of Reinforcement Learners). There are good arguments against these, however, such as that it would be quite difficult for an eager EA to get much traction with a new digital sentience nonprofit. (We considered founding Sentience Institute with a focus on digital sentience. This was a big reason we didn’t.) Whereas given the current excitement in the farmed animal space (e.g. the coming release of “clean meat,” real meat grown without animal slaughter), the farmed animal space seems like a fantastic place for gaining traction.
I’m currently not very excited about “Start a petting zoo at DeepMind” (or similar direct outreach strategies) because it seems too adversarial and aggressive, and would likely produce a ton of backlash. There are additional considerations for/against (e.g. I worry that it’d be difficult to push a niche demographic like AI researchers very far away from the rest of society, at least the rest of their social circles; I also have the same traction concern I have with advocating for digital sentience), but the backlash concern alone seems quite damning.
> The upshot is that, even if there are some particularly high yield interventions in animal welfare from the far future perspective, this should be fairly far removed from typical EAA activity directed towards having the greatest near-term impact on animals. If this post heralds a pivot of Sentience Institute to directions pretty orthogonal to the principal component of effective animal advocacy, this would be welcome indeed.
I agree this is a valid argument, but given the other arguments (e.g. those above), I still think it’s usually right for EAAs to focus on farmed animal advocacy, including Sentience Institute at least for the next year or two.
(FYI for readers, Gregory and I also discussed these things before the post was published when he gave feedback on the draft. So our comments might seem a little rehearsed.)
Thanks! That’s very kind of you.
I’m pretty uncertain about the best levers, and I think research can help a lot with that. Tentatively, I do think that MCE ends up aligning fairly well with conventional EAA (perhaps it should be unsurprising that the most important levers to push on for near-term values are also most important for long-term values, though it depends on how narrowly you’re drawing the lines).
A few exceptions to that:
Digital sentience probably matters the most in the long run. There are good reasons to be skeptical we should be advocating for this now (e.g. it’s quite outside of the mainstream so it might be hard to actually get attention and change minds; it’d probably be hard to get funding for this sort of advocacy (indeed that’s one big reason SI started with farmed animal advocacy)), but I’m pretty compelled by the general claim, “If you think X value is what matters most in the long-term, your default approach should be working on X directly.” Advocating for digital sentience is of course neglected territory, but Sentience Institute, the Nonhuman Rights Project, and Animal Ethics have all worked on it. People for the Ethical Treatment of Reinforcement Learners has been the only dedicated organization AFAIK, and I’m not sure what their status is or if they’ve ever paid full-time or part-time staff.
I think views on value lock-in matter a lot because of how they affect the case for food tech (e.g. supporting The Good Food Institute). I place significant weight on this and a few other things (see this section of an SI page) that make me think GFI is actually a pretty good bet, despite my concern that technology progresses monotonically.
Because what might matter most is society’s general concern for weird/small minds, we should be more sympathetic to indirect antispeciesism work like that done by Animal Ethics and the fundamental rights work of the Nonhuman Rights Project. From a near-term perspective, I don’t think these look very good because I don’t think we’ll see fundamental rights be a big reducer of factory farm suffering.
This is a less-refined view of mine, but I’m less focused than I used to be on wild animal suffering. It just seems to cost a lot of weirdness points, and naturogenic suffering doesn’t seem nearly as important as anthropogenic suffering in the far future. Factory farm suffering seems a lot more similar to far future dystopias than does wild animal suffering, despite WAS dominating utility calculations for the next, say, 50 years.
I could talk more about this if you’d like, especially if you’re facing specific decisions like where exactly to donate in 2018 or what sort of job you’re looking for with your skillset.
I’m sympathetic to both of those points personally.
1) I considered that, and in addition to time constraints, I know others haven’t written on this because there’s a big concern that talking about it makes it more likely to happen. I err more towards sharing it despite this concern, but I’m pretty uncertain. Even the level of detail in this post was more than several people wanted me to include.
But mostly, I’m just limited on time.
2) That’s reasonable. I think all of these boundaries are fairly arbitrary; we just need to try to use the same standards across cause areas, e.g. considering only work with this as its explicit focus. Theoretically, since Neglectedness is basically just a heuristic to estimate how much low-hanging fruit there is, we’re aiming at “The space of work that might take such low-hanging fruit away.” In this sense, Neglectedness could vary widely. E.g. there’s limited room for advocating (e.g. passing out leaflets, giving lectures) directly to AI researchers, but this isn’t affected much by advocacy towards the general population.
I do think moral philosophy that leads to expanding moral circles (e.g. writing papers supportive of utilitarianism), moral-circle-focused social activism (e.g. anti-racism, as opposed to something like campaigning for increased arts funding, which seems fairly orthogonal to MCE), and EA outreach (in the sense that the A of EA means a wide moral circle) are MCE in the broadest somewhat-useful definition.
Caspar’s blog post is a pretty good read on the nuances of defining/utilizing Neglectedness.
That makes sense. If I were convinced hedonium/dolorium dominated to a very large degree, and that hedonium was as good as dolorium is bad, I would probably think the far future was at least moderately +EV.
Agreed.
Yeah, I think that’s basically right. I think moral circle expansion (MCE) is closer to your list items than extinction risk reduction (ERR) is because MCE mostly competes in the values space, while ERR mostly competes in the technology space.
However, MCE is competing in a narrower space than just values. It’s in the MC space, which is just the space of advocacy on what our moral circle should look like. So I think it’s fairly distinct from the list items in that sense, though you could still say they’re in the same space because all advocacy competes for news coverage, ad buys, recruiting advocacy-oriented people, etc. (Technology projects could also compete for these things, though there are separations, e.g. journalists with a social beat versus journalists with a tech beat.)
I think the comparably narrow space for ERR is ER (extinction risk itself), which also includes people who don’t want extinction risk reduced (or who even want it increased), such as some hardcore environmentalists, antinatalists, and negative utilitarians.
I think these are legitimate cooperation/coordination perspectives, and it’s not really clear to me how they add up. But in general, I think this matters mostly in situations where you actually can coordinate. For example, if Democrats and Republicans in a US general election came together and agreed not to give to their respective campaigns (in exchange for their counterparts also not doing so). Or if there were anti-MCE EAs with whom MCE EAs could coordinate (which I think is basically what you’re saying with “we’d be better off if they both decided to spend the money on anti-malaria bednets”).
Thanks for the comment! A few of my thoughts on this:
Presumably we want some people working on both of these problems, some people have skills more suited to one than the other, and some people are just going to be more passionate about one than the other.
If one is convinced non-extinction civilization is net positive, this seems true and important. Sorry if I framed the post too much as one or the other for the whole community.
> Much of the work related to AIA so far has been about raising awareness about the problem (e.g. the book Superintelligence), and this is more a social solution than a technical one.
Maybe. My impression from people working on AIA is that they see it as mostly technical, and indeed they think much of the social work has been net negative. Perhaps not Superintelligence, but at least the work that’s been done to get media coverage and widespread attention without the technical attention to detail of Bostrom’s book.
I think the more important social work (from a pro-AIA perspective) is about convincing AI decision-makers to use the technical results of AIA research, but my impression is that AIA proponents still think getting those technical results is probably the more important project.
There’s also social work in coordinating the AIA community.
> First, I expect clean meat will lead to the moral circle expanding more to animals. I really don’t see any vegan social movement succeeding in ending factory farming anywhere near as much as I expect clean meat to.
Sure, though one big issue with technology is that it seems like we can do far less to steer its direction than we can do with social change. Clean meat tech research probably just helps us get clean meat sooner instead of making the tech progress happen when it wouldn’t otherwise. The direction of the far future (e.g. whether clean meat is ever adopted, whether the moral circle expands to artificial sentience) probably matters a lot more than the speed at which it arrives.
Of course, this gets very complicated very quickly, as we consider things like value lock-in. Sentience Institute has a bit of basic sketching on the topic on this page.
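To make that speed-versus-direction intuition concrete, here’s a minimal sketch with made-up symbols (my own illustrative framing, not something from that page): let $v$ be the annual value of the good outcome once it’s in place, $t$ the number of years a speed-up buys, $T$ the (astronomically long) timespan over which the far future plays out, and $\Delta p$ the change in the probability that the good outcome happens at all. Then, roughly,

$$
\Delta\mathbb{E}[\text{speed-up}] \approx v \cdot t, \qquad \Delta\mathbb{E}[\text{direction change}] \approx \Delta p \cdot v \cdot T,
$$

so even a small $\Delta p$ dominates a large speed-up once $T$ is vastly larger than $t$. That’s the basic reason I think steering the direction matters more than accelerating the arrival.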
> Second, I’d imagine that a mature science of consciousness would increase MCE significantly. Many people don’t think animals are conscious, and almost no one thinks anything besides animals can be conscious.
I disagree that “many people don’t think animals are conscious.” I almost exclusively hear that view from the rationalist/LessWrong community. A recent survey suggested that 87.3% of US adults agree with the statement, “Farmed animals have roughly the same ability to feel pain and discomfort as humans,” and presumably even more think they have at least some ability.
> Advanced neurotechnologies could change that—they could allow us to potentially test hypotheses about consciousness.
I’m fairly skeptical of this personally, partly because I don’t think there’s a fact of the matter when it comes to whether a being is conscious. I think Brian Tomasik has written eloquently on this. (I know this is an unfortunate view for an animal advocate like me, but it seems to have the best evidence favoring it.)
I’d go farther here and say all three (global poverty, animal rights, and far future) are best thought of as target populations rather than cause areas. Moreover, the space not covered by these three is basically just wealthy modern humans (WMHs), which seems to be much less of a treasure trove because WMHs already have far more resources than the other three populations. (Potentially there’s also medium-term future beings as a distinct population, depending on where we draw the lines.)
I think EA would probably be discovering more things if we focused on looking not for new cause areas but for new specific intervention areas, comparable to individual health support for the global poor (e.g. antimalarial nets, deworming pills), individual financial help for the global poor (e.g. unconditional cash transfers), individual advocacy of plant-based eating (e.g. leafleting, online ads), institutional farmed animal welfare reforms (e.g. cage-free campaigns), technical AI safety research, and general extinction risk policy work.
If we think of the EA cause area landscape in “intervention area” terms, there seems to be a lot more change happening.
Thanks for the response. My main general thought here is just that we shouldn’t expect so much from the reader. Most people, even most thoughtful EAs, won’t read an article in full and come up with all the qualifications on their own, so it’s important for writers to include those qualifications themselves, and to put them front and center in their articles.
If you wanted to spend a lot of time on “what causes do EA leadership favor,” one project I see as potentially really valuable is compiling a list of arguments/evidence and getting EA leaders to vote on their weights. Sort of a combination of 80k’s quantitative cause assessment and this survey. I think this is a more ideal form of peer-belief aggregation because it reduces the effects of dependence. For example, if Rob and Jacy both prioritize the far future entirely because of Bostrom’s calculation of how many beings could exist in it, then we’d find that single argument having a high weight, rather than two people highly favoring the far future. We might try this approach at Sentience Institute at some point, though right now we’re more focused on just compiling the lists of arguments/evidence in the field of moral circle expansion, so instead we did something more like your 2017 survey of researchers in this field. (Specifically, we would have researchers rate the pieces of evidence listed on this page: https://www.sentienceinstitute.org/foundational-questions-summaries)
That’s probably not the best approach, but I’d like a survey approach that somehow tries to minimize the dependence effect. A simpler version would be to just ask for people’s opinions but then have them rate how much they’re basing their views on the views of their peers, or just ask for their view and confidence while pretending they’ve never heard peer views, but this sort of approach seems more vulnerable to bias than the evidence-rating method.
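To make the contrast concrete, here’s a minimal sketch of the two aggregation approaches in Python (all names and numbers are hypothetical, purely to illustrate the dependence problem; Rob, Jacy, and Carol here are stand-ins rather than real survey responses):

```python
from statistics import mean

# Approach A: aggregate bottom-line views (credence in "prioritize the far future").
# This double-counts any argument that several respondents share.
bottom_line_views = {"Rob": 0.9, "Jacy": 0.9, "Carol": 0.5}
naive_aggregate = mean(bottom_line_views.values())

# Approach B: have each respondent rate how much weight they place on each
# argument, then aggregate at the argument level instead of the person level.
argument_weights = {
    "Bostrom-style estimate of how many beings the far future could hold":
        {"Rob": 0.8, "Jacy": 0.8, "Carol": 0.1},
    "Long-run effects are too unpredictable to target":
        {"Rob": 0.1, "Jacy": 0.1, "Carol": 0.6},
}
per_argument = {arg: mean(weights.values()) for arg, weights in argument_weights.items()}

print(f"Naive average of bottom-line views: {naive_aggregate:.2f}")
for arg, weight in per_argument.items():
    print(f"Mean weight on '{arg}': {weight:.2f}")
```

The naive average looks like a fairly strong consensus, while the argument-level view shows that the apparent consensus rests heavily on a single shared argument, which is exactly the dependence effect I’d want a survey to surface.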
Anyway, have fun at EAG London! Curious if anything that happens there really surprises you.
The content on Felicifia.org was most important in my first involvement, though that website isn’t active anymore. I feel like forum content (similar to what could be on the EA Forum!) was important because it’s casually written and welcoming. Everyone was working together on the same problems and ideas, so I felt eager to join.