This post is extremely valuable—thank you! You have caused me to reexamine my views about the expected value of the far future.
What do you think are the best levers for expanding the moral circle, besides donating to SI? Is there anything else outside of conventional EAA?
Thanks! That’s very kind of you.
I’m pretty uncertain about the best levers, and I think research can help a lot with that. Tentatively, I do think MCE ends up aligning fairly well with conventional EAA (perhaps unsurprisingly: the levers that matter most for near-term values also seem to matter most for long-term values, though it depends on how narrowly you’re drawing the lines).
A few exceptions to that:
Digital sentience probably matters the most in the long run. There are good reasons to be skeptical that we should be advocating for it now: it’s quite outside the mainstream, so it might be hard to actually get attention and change minds, and it’d probably be hard to get funding for this sort of advocacy (indeed, that’s one big reason SI started with farmed animal advocacy). Still, I’m pretty compelled by the general claim, “If you think value X is what matters most in the long term, your default approach should be working on X directly.”
Advocating for digital sentience is of course neglected territory, but Sentience Institute, the Nonhuman Rights Project, and Animal Ethics have all worked on it. People for the Ethical Treatment of Reinforcement Learners has been the only dedicated organization AFAIK, and I’m not sure what their status is or whether they’ve ever paid full-time or part-time staff.
I think views on value lock-in matter a lot because of how they affect the case for food tech (e.g. supporting The Good Food Institute). I place significant weight on this and a few other considerations (see this section of an SI page) that make me think GFI is actually a pretty good bet, despite my concern that technology progresses monotonically regardless of advocacy.
Because what might matter most is society’s general concern for weird/small minds, we should be more sympathetic to indirect antispeciesism work like that of Animal Ethics and the fundamental-rights work of the Nonhuman Rights Project. From a near-term perspective, these don’t look very good to me, because I don’t expect fundamental rights to do much to reduce factory farm suffering.
This is a less-refined view of mine, but I’m less focused than I used to be on wild animal suffering (WAS). It just seems to cost a lot of weirdness points, and naturogenic suffering doesn’t seem nearly as important as anthropogenic suffering in the far future. Factory farm suffering seems much more analogous to far-future dystopias than wild animal suffering does, even though WAS plausibly dominates utility calculations for the next, say, 50 years.
I could talk more about this if you’d like, especially if you’re facing specific decisions like where exactly to donate in 2018 or what sort of job to look for given your skill set.