As someone in the EAA space, I’m curious how much value EAA movement-building brings relative to general animal advocacy movement-building. ACE has cited the latter as neglected. In general I think EAA movement-building may be somewhat narrower because of the conjunction of beliefs (animal advocacy and EA), which I would think makes it less tractable and potentially lower in scale.
I would think any direct impact would be quite small, as the products are all aimed at substituting for cow products now, and cows are basically a rounding error in the number of farmed animals. It seems like the main impact of Beyond Meat is to drive the plant-based industry further and to establish itself to be able to produce higher-impact replacements down the line.
One question I have about these discussions: back when I took a class on environmental science, I read some arguments that humanity was near the earth’s carrying capacity, meaning the earth cannot support a much larger population (and the sustainable population is quite likely smaller than the current one). This term “carrying capacity” seems like a sketchy one that tacitly packs normative judgments in with positive ones, so I don’t endorse it. But is there a chance that something like this is true, so that lengthening lives would reduce the probability of others being born, because it raises the probability of environmental problems that lower the sustainable population?
Wow, this is incredible. Such a great write-up and so much here. Two questions:
1) Do you think there is a consensus that Jon Mallatt and Todd Feinberg are among the leading experts and their books among the best on the subject? Just trying to figure out how much to update based on this.
2) Jon Mallatt seems, from the way he talks, like the type of biologist who could be amenable to wild-animal welfare research. Has anyone reached out to him?
In general, I think the degree of compliance with any social norm one typically observes should be surprising. I’ve long thought it remarkable how rarely people intent on harming others use cars as weapons, despite the recent uptick in such attacks. So there’s something both disturbing and encouraging here: many easy ways of hurting others are used far less often than we would expect in the presence of perfect information.
I’m getting married in September and November (he’s Brazilian, and we want to celebrate in the U.S. and Brazil). Mostly following out of interest, but some things we’ve thought of:
1) I’m interested in doing a giving game in the bags we leave for guests at the hotel. Guests can vote on where to give money from a list of charities and descriptions.
2) Obviously, we’ll try to direct gifts toward donations.
3) We’re animal advocates, and we found officiants who are on board with that and who will probably speak about all sentient beings in addition to our vows.
This is very interesting to see/hear. I have a paper coming out that’s purely theoretical but that deals with this issue, and I’d be interested in talking more about this spreadsheet.
Very excited to see where this goes and, I hope, participate in it!
I’d never heard of this center and find this work really interesting! Do you think deliberative polling in the context of values could be a way of getting some idea of where coherent extrapolated volition would go?
When I bring this up with EAs who are focused on AI safety, many of them suggest that we only need to get AI safety right and then the AI can solve the question of what consciousness is.
I find this somewhat frustrating. Obviously there’s a range of views in the EA community on this issue, but I think the most plausible arguments for focusing on AI safety are that there is a low but non-negligible chance of a huge impact. If that’s true, then “getting AI safety right” leaves a lot unaddressed, because in most scenarios getting AI safety right is only a small portion of the picture. In general, I think we need to find ways to hold two thoughts at once: that AI safety is critical, and that there’s a very significant chance of other things mattering too.
I’m curious what “wild rat” means here: does it include rats that live in cities and enter apartments? If not, did you consider mice and rats (and other animals) killed in traps by humans? I know many traps are quite awful, such as poison or glue traps that leave animals to die of starvation, so I thought this might be a priority category.
This is indescribably awesome.
It seems like the listed reasons why organizations that value talent highly aren’t hiring center on a growth constraint that can’t be alleviated by money or talent. If there is such a growth constraint, then doesn’t that just mean we should focus elsewhere, e.g., on activities independent of these organizations? If organizations have slightly more room for talent than for money, but ultimately little room for either, then their relative preference between the two shouldn’t matter much, no?
As I noted on the original post, I am grateful this dialogue is happening so respectfully this time around.
I’m grateful to see this dialogue conducted so respectfully, and I thank both sides for engaging in it.