I think you might have replied to the wrong comment.
Reading about Julia and Jeff was part of the reason I got into EA in the first place (I don’t remember if those were the particular articles). It wasn’t just the fact that they were donating a substantial fraction of their income, but also that they saw it as an obligation. At the time, I was going through an existential crisis; I felt guilt and shame for living selfishly while others suffered (and these feelings are still important motivators for me). EA was the solution I found, and I decided to try earning to give.
I’m a little surprised that the estimates for chickens and cows aren’t higher. Personally, I find evidence of complex and varied emotions to be very compelling, especially social emotions, e.g. play behaviour, emotional empathy/contagion, affection and social attachments to particular individuals (companionship), helping behaviour (altruism), parenting generally, separation anxiety and perhaps even something like grief. Also, possible emotional reactions of cattle to learning. :P
I would be comfortable using the word ‘love’ to describe the attachments chickens and cows often have towards others, although it may of course be quite different from an adult human’s experience of love, and perhaps closer to an infant’s or toddler’s. It’s hard for me to imagine an individual capable of love like this not being sentient.
I suppose I also give weight to anecdotes and videos of individual animals, though.
How should we interpret ranges of probabilities here?
For polls and surveys, we can talk about confidence (credence) intervals for frequencies in the population we’re sampling from. For species (or individuals) with characteristics of interest (possibly a feature or its absence) X1, X2, …, Xn, we could describe our probability distribution over the fraction of them that are sentient.
Another approach might be to try to quantify the sensitivity to new information, e.g. if we also observed another given capacity (or its absence), how much would our estimate change? If we model the probability that a species (or individual) will have a set X of characteristics of interest given a fixed set of observed characteristics, we could compute a credence interval for our posterior probability of sentience with X, over the distribution of X conditional on observed characteristics.
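To make that second idea concrete, here’s a minimal sketch in Python. The joint distribution and the hypothetical characteristics A and B are entirely made up for illustration: we condition on an observed characteristic and look at how the posterior probability of sentience could move depending on the outcome of a further observation, which gives a crude range of posteriors.

```python
# Hypothetical joint prior over (sentient, has characteristic A, has characteristic B).
# All numbers are made up purely for illustration; they must sum to 1.
joint = {
    (True,  True,  True):  0.20,
    (True,  True,  False): 0.05,
    (True,  False, True):  0.05,
    (True,  False, False): 0.05,
    (False, True,  True):  0.05,
    (False, True,  False): 0.15,
    (False, False, True):  0.10,
    (False, False, False): 0.35,
}
assert abs(sum(joint.values()) - 1) < 1e-9

def prob(predicate):
    """Probability of the set of outcomes satisfying the predicate."""
    return sum(p for outcome, p in joint.items() if predicate(outcome))

# Posterior probability of sentience after observing characteristic A alone.
p_A = prob(lambda o: o[1])
p_sentient_given_A = prob(lambda o: o[0] and o[1]) / p_A
print("P(sentient | A) =", round(p_sentient_given_A, 3))

# How the posterior would move if we additionally observed B or its absence,
# weighted by how likely each observation is given A. The spread of these
# possible posteriors is one way to express a range of probabilities.
for b in (True, False):
    p_obs = prob(lambda o: o[1] and o[2] == b) / p_A  # P(B = b | A)
    p_post = (prob(lambda o: o[0] and o[1] and o[2] == b)
              / prob(lambda o: o[1] and o[2] == b))   # P(sentient | A, B = b)
    print(f"P(B = {b} | A) = {p_obs:.2f}, posterior P(sentient | A, B = {b}) = {p_post:.3f}")
```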
Is either of these what some of you had in mind, roughly (even if you didn’t actually calculate anything)? Or something else?
I suppose many of the reasons I outline might be special cases of more generic reasons (especially for investing or donating to research), but it is worth pointing out what they look like in animal protection, since that helps us weigh them more accurately. Some generic reasons might not apply to specific causes at all, while others might apply especially strongly to particular causes.
I think 1-3 and 5 under giving now towards interventions are pretty specific to animal advocacy, although 5 applies to moral advocacy generally. I guess you could say 1 and 6 are special cases of the problem being solved eventually regardless, and 4 could be a consideration whenever there are incremental improvements.
I also just added a few more reasons which are fairly specific to animal protection in favour of giving later.
What criteria were used to decide which orgs/individuals should be invited? Should we consider leaders at EA-recommended orgs or orgs doing cost-effective work in EA cause areas, but not specifically EA-aligned (e.g. Gates Foundation?), too? (This was a concern raised about representativeness of the EA handbook. https://forum.effectivealtruism.org/posts/MQWAsdm8MSzYNkD9X/announcing-the-effective-altruism-handbook-2nd-edition#KR2uKZqSmno7ANTQJ)
Because of this, I don’t think it really makes sense to aggregate data over all cause areas. The inclusion criteria are likely to draw pretty arbitrary lines, and respondents will obviously tend to want to see more resources go to the causes they’re working on, and will differ significantly in other ways by cause area. If the proportion of people working in a given cause doesn’t match the proportion of EA funding people would like to see go to that cause, that is interesting, but we still can’t take much away from it.
It seems weird to me that DeepMind and the Good Food Institute are on this list, but not, say, the Against Malaria Foundation, GiveDirectly, Giving What We Can, J-PAL, IPA, or the Humane League.
As stated, some orgs are small and so were not named, though they still responded. Maybe a breakdown by cause area for all the respondents would be more useful with the data you already have?
The post was in the negative for a bit, I think the day that it was posted or maybe the next day.
The Supervenience Theorem is quite strong and interesting, but perhaps too strong for many with egalitarian or prioritarian intuitions. Indeed, this is discussed with respect to the conditions for the theorem. In its proof, it’s shown that we should treat any problem like the original position behind the veil of ignorance (the one-person scenario; for n individuals, we treat ourselves as having probability 1/n of being any of those n individuals, and we consider only our own interests in that case), so that every interpersonal tradeoff is the same as a personal tradeoff. This is something that I’m personally quite skeptical of. In fact, if each individual ought to maximize their own expected utility in a way that is transitive and independent of irrelevant alternatives when only their own interests are at stake, then fixed-population Expected Totalism follows (for a fixed population, we should maximize the unweighted total expected utility). The Supervenience Theorem is something like a generalization of Harsanyi’s Utilitarian Theorem this way. EDIT: Ah, it seems like this link is made indirectly through this paper, which is cited.
That being said, the theorem could also be seen as an argument for Expected Totalism, if each of its conditions can be defended, or at least to whoever leans towards accepting them.
If we’ve already given up the independence of irrelevant alternatives (whether A or B is better should not depend on what other outcomes are available), it doesn’t seem like much of an extra step to give up separability (whether A or B is better should only depend on what’s not common to A and B) or Scale Invariance, which is implied by separability. There are different ways to care about the distribution of welfares, and prioritarians and egalitarians might be happy to reject Scale Invariance this way.
Prioritarians and egalitarians can also care about ex ante priority/equality, e.g. everyone deserves a fair chance ahead of time, and this would be at odds with Statewise Supervenience. For example, given H=heads and T=tails, each with probability 0.5, they might prefer the second of these two options, since it looks fairer to Adam ahead of time, as he actually gets a chance at a better life. Statewise Supervenience says these should be equivalent (schematically: the first option gives Adam the worse life and Eve the better life however the coin lands, while the second gives Adam the better life and Eve the worse life on H, and the reverse on T; each state contains one better and one worse life either way, but only the second option gives Adam any chance at the better one).
If someone cares about ex post equality, e.g. the final outcome should be fair to everyone in it, they might reject Personwise Supervenience, because personwise-equivalent scenarios can be unfair in their final outcomes. Schematically, suppose the first option gives Adam the worse life and Eve the better life on H, and the reverse on T, while the second option gives both of them the better life on H and both the worse life on T. The first option here looks unfair to Adam if H happens (ex post), and unfair to Eve if T happens (ex post), but there’s no such unfairness in the second option. Personwise Supervenience says we should be indifferent, because from Adam’s point of view, ignoring Eve, there’s no difference between these two choices, and similarly from Eve’s point of view. Note that maximin, which is a limit of prioritarian views, is ruled out.
There are, of course, objections to giving these up. Giving up Personwise Supervenience seems paternalistic, or to override individual interests if we think individuals ought to maximize their own expected utilities. Giving up Statewise Supervenience also has its problems, as discussed in the paper. See also “Decide As You Would With Full Information! An Argument Against Ex Ante Pareto” by Marc Fleurbaey and Alex Voorhoeve, as well as one of my posts which fleshes out ex ante prioritarianism (ignoring the problem of personal identity) and the discussion there.
There’s also a video in which the author presents the work there. Here’s the direct link.
Regarding the definition of the Asymmetry,
2. If the additional people would certainly have good lives, it is permissible but not required to create them
is this second part usually stated so strongly, even in a straight choice between two options? Normally I only see “not required”, not also “permissible”, but then again, I don’t normally see it as a comparison of two choices only. This rules out average utilitarianism, critical-level utilitarianism, negative utilitarianism, maximin and many other theories which may say that it’s sometimes bad to create people with overall good lives, all else equal. Actually, basically any value-monistic consequentialist theory which is complete, transitive and satisfies the independence of irrelevant alternatives and non-antiegalitarianism, and avoids the repugnant conclusion is ruled out.
What if we redefine rationality to be relative to choice sets? We might not have to depart too far from vNM-rationality this way.
The axioms of vNM-rationality are justified by Dutch books/money pumps and stochastic dominance, but the latter can be weakened, too, since many outcomes are indeed irrelevant, so there’s no need to compare to them all. For example, there’s no Dutch book or money pump that only involves changing the probabilities for the size of the universe, and there isn’t one that only involves changing the probabilities for logical statements in standard mathematics (ZFC); it doesn’t make sense to ask me to pay you to change the probability that the universe is finite. We don’t need to consider such lotteries. So, if we can generalize stochastic dominance to be relative to a set of possible choices, then we just need to make sure we never choose an option which is stochastically dominated by another, relative to that choice set. That would be our new definition of rationality.
Here’s a first attempt:
Let C be a set of choices or probabilistic lotteries over outcomes (random variables), and let O be the set of all possible outcomes which have nonzero probability in some choice from C (or something more general to accommodate general probability measures). Then for X, Y ∈ C, we say X stochastically dominates Y with respect to C if
P(z <_C X) ≥ P(z <_C Y)
for all z ∈ O, and the inequality is strict for some z ∈ O. This can lift comparisons using <_C, a relation ⊆ O×O, between elements of O to random variables over the elements of O. <_C need not even be complete over O or transitive, but stochastic dominance thus defined will be transitive (perhaps at the cost of losing some comparisons). <_C could also actually be specific to C, not just to O.
We could play around with the definition of O here.
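For what it’s worth, here’s a minimal sketch of that check for finite lotteries, assuming the dominance condition is read as P(z <_C X) ≥ P(z <_C Y) for all z ∈ O, strict for some z. The representation and the toy numbers are mine, not from the post.

```python
def outcomes(C):
    """O: every outcome with nonzero probability in some choice from C."""
    return {o for X in C for o, p in X.items() if p > 0}

def dominates(X, Y, C, lt):
    """True if X stochastically dominates Y with respect to C, i.e.
    P(z <_C X) >= P(z <_C Y) for every z in O, strictly for some z.
    Choices are dicts mapping outcomes to probabilities; lt(a, b) encodes the
    (possibly incomplete, possibly intransitive) outcome relation a <_C b."""
    O = outcomes(C)
    strict = False
    for z in O:
        p_x = sum(p for o, p in X.items() if lt(z, o))
        p_y = sum(p for o, p in Y.items() if lt(z, o))
        if p_x < p_y:
            return False
        if p_x > p_y:
            strict = True
    return strict

# Toy example: outcomes are welfare levels compared by the usual order.
X = {0: 0.5, 10: 0.5}
Y = {0: 0.6, 10: 0.4}
C = [X, Y]
print(dominates(X, Y, C, lambda a, b: a < b))  # True: X shifts probability to the better outcome
print(dominates(Y, X, C, lambda a, b: a < b))  # False
```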
When we consider choices to make now, we need to model the future and consider what new choices we will have to make, and this is how we would avoid Dutch books and money pumps. Perhaps this would be better done in terms of decision policies rather than a single decision at a time, though.
(This approach is based in part on “Exceeding Expectations: Stochastic Dominance as a General Decision Theory” by Christian Tarsney, which also helps to deal with Pascal’s wager and Pascal’s mugging.)
Saying terrifying things can be costly, both socially and reputationally (and there’s also the possible side effect of, well, making people terrified).
Is this the case in the AI safety community? If the reasoning for their views isn’t obviously bad, I would guess that it’s “cool” to say unpopular or scary but not unacceptable things, because the rationality community has been built in part on this.
I’m not sure how important Krogh’s Principle is in animal cognition research of the kind we’re interested in; my impression is that research concentrates on animals that are already well-studied, like fruit flies, bees, mice, rats, cats, dogs, farmed animals and the stereotypically smart ones (corvids, parrots, elephants, cetaceans, primates), and the animals EAs are interested in fall into these groups. When I want to know about chicken cognition, I just look for studies on chickens. It’s worth mentioning that Rethink Priorities stuck to relatively narrow taxa in their report.
I do agree that this research is likely to be biased overall towards producing more positive results than could be reproduced or generalized. However, I also think the priors are already very skeptical (e.g. Morgan’s canon over Occam’s razor, and despite common descent), so scientists are also likely to attribute fewer and less complex mental states to animals than I think best explains the evidence. It’s pretty clear that we’ve systematically underestimated their capacities, so it’s likely the current state of research underestimates them overall, too.
Or, rather, researchers aren’t using Bayesian reasoning in the first place, so they aren’t really using priors at all in interpreting evidence; I think Morgan’s canon is more like a p-value threshold than a prior.
Of course, we can just use our own priors in interpreting the evidence, and in doing so, we should take into account biases towards positive results in research.
Also, I think that at least some researchers are less likely to discuss their estimates publicly if they’re leaning towards shorter timelines and a discontinuous takeoff, which subjects the public discourse on the topic to a selection bias.
Why do you think this?
EDIT: Ah, Matthew got to it first.
I think another large part of the focus comes from their views on population ethics. For example, in the article, you can “save” people by ensuring they’re born in the first place:
Let’s explore some hypothetical numbers to illustrate the general concept. If there’s a 5% chance that civilisation lasts for ten million years, then in expectation, there are 5000 future generations. If thousands of people making a concerted effort could, with a 55% probability, reduce the risk of premature extinction by 1 percentage point, then these efforts would in expectation save 28 future generations. If each generation contains ten billion people, that would be 280 billion lives saved. If there’s a chance civilisation lasts longer than ten million years, or that there are more than ten billion people in each future generation, then the argument is strengthened even further.
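As a quick check of the quoted arithmetic, assuming generations of roughly 100 years (that assumption is mine; the quote only gives the headline numbers):

```python
# Re-deriving the quoted figures under the ~100-year-generation assumption.
p_civilisation_lasts = 0.05                       # 5% chance of lasting ten million years
generations_if_it_lasts = 10_000_000 / 100        # 100,000 generations
expected_generations = p_civilisation_lasts * generations_if_it_lasts   # 5,000

p_effort_succeeds = 0.55                          # 55% probability the effort works
risk_reduction = 0.01                             # 1 percentage point
generations_saved = expected_generations * p_effort_succeeds * risk_reduction  # 27.5, quoted as ~28

people_per_generation = 10e9
lives_saved = generations_saved * people_per_generation  # ~2.75e11, quoted as ~280 billion
print(expected_generations, generations_saved, lives_saved)
```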
I discuss this further in the “Implications for EA priorities” section of this post of mine. I recommend trying this tool of theirs.
When you say “we do not invest in _ research”, do you mean EAs specifically, or all humans? It’s worth noting some people not associated with EA will probably do research in each area regardless.
The probability that if we do not invest in X-risk reduction research (but we invest in wild animal suffering reduction research instead), humans will go extinct and animals will not go extinct, and if we do invest in that X-risk research, humans will not go extinct, is p.
I’m having trouble understanding this probability. I don’t think it can be interpreted as a single event (even conditionally), unless you’re thinking of probabilities over probabilities or probabilities over statements, not actual events that can happen at specific times and places (or over intervals of time, regions in space).
X = humans go extinct
XA = non-human animals go extinct
RX = we invest in X-risk reduction research (or work, in general)
RW = we invest in WAS research (or work, in general)
Then the probability of “if we do not invest in X-risk reduction research (but we invest in wild animal suffering reduction research instead), humans will go extinct and animals will not go extinct” looks like
P(X ∧ ¬XA | ¬RX ∧ RW),
while the probability of “if we do invest in that X-risk research, humans will not go extinct” looks like
P(¬X | RX).
The events being conditioned on between these two probabilities are not compatible, since the first has not RX, while the second has RX. So, I’m not sure taking their product would be meaningful either. I think it would make more sense to multiply these two probabilities by the expected value of their corresponding events and just compare them. In general, you would calculate
E[V | RX = r, RW = s],
where RX is now the level of investment in X-risk work, RW is now the level of investment in WAS work, and V is the aggregate value. Then you would compare this for different values of r and s, i.e. different levels of investment (or compare the partial derivatives with respect to each of r and s, at a given level of r and s; this would tell you the marginal expected value of extra resources going to each of X-risk work and WAS work).
With X being 1 if humans go extinct and 0 otherwise (the indicator function), XA being 1 if non-human animals go extinct and 0 otherwise, and V depending on them, that expected value could further be broken down to get
E[V | RX = r, RW = s] = Σ over x, xA in {0,1} of P(X = x, XA = xA | RX = r, RW = s) · E[V | X = x, XA = xA, RX = r, RW = s].
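As a toy illustration of that comparison (all probabilities and values below are made-up placeholders, not estimates, and for simplicity the conditional values are held fixed across allocations even though WAS work would presumably change them too):

```python
def expected_value(p_outcomes, values):
    """E[V | RX = r, RW = s], decomposed over the four extinction outcomes.
    p_outcomes maps (X, XA) in {0, 1}^2 to P(X, XA | RX = r, RW = s);
    values maps (X, XA) to E[V | X, XA, RX = r, RW = s]."""
    return sum(p_outcomes[o] * values[o] for o in p_outcomes)

# (X, XA): 1 = extinct, 0 = not extinct. Values are arbitrary placeholders.
values = {(0, 0): 100, (0, 1): 80, (1, 0): 20, (1, 1): 0}

# Hypothetical outcome probabilities under the two allocations being compared.
p_more_xrisk_work = {(0, 0): 0.90, (0, 1): 0.04, (1, 0): 0.01, (1, 1): 0.05}
p_more_was_work   = {(0, 0): 0.89, (0, 1): 0.04, (1, 0): 0.02, (1, 1): 0.05}

print(expected_value(p_more_xrisk_work, values))  # E[V] with more X-risk work
print(expected_value(p_more_was_work, values))    # E[V] with more WAS work
```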
You specify further that
This probability is the product of the probability that there will be a potential extinction event (e.g. 10%), the probability that, given such an event, the extra research in X-risk reduction (with the resources that would otherwise have gone to wild animal suffering research) to avoid that extinction event is both necessary and sufficient to avoid human extinction (e.g. 1%) and the probability that animals will survive the extinction event even if humans do not (e.g. 1%).
But you’re treating the probability of a potential extinction event as if X-risk reduction research has no effect on it, and only affects the probability of actual human extinction given that event; X-risk research aims to address both.
The probability that A is “both necessary and sufficient” for B is also a bit difficult to think about. One way might be the following, but I think this would be difficult to work with, too:
I think it’s plausible that changing incentives and “better” options coming along might explain a lot of the drift. However, rather than “Power. Survival. Prestige. Odds of procreation.”, I think the draws will be less selfish things, like family, or just things they end up finding more interesting; maybe they’ll just get bored with EA.
However, I think you underestimate how deeply motivated many people are to help others for their own sake, out of a sense of duty or compassion. Sure, this probably isn’t most people, and maybe not even most EAs, although I wouldn’t be surprised if it were.
Have the concept of speciesism and the argument from species overlap/marginal cases been important for animal protection? I’d attribute them largely to philosophers.
I think we should also look at the influence the EA community has and where its ideas come from. What would EA look like without a given idea from philosophy?