I’m Anthony DiGiovanni, a suffering-focused AI safety researcher at the Center on Long-Term Risk. I (occasionally) write about altruism-relevant topics on my Substack. All opinions my own.
My guess is the figure is so small at least partly because of an assumption that the default expected value of the far future is already high. If that's the case, then someone who, all else equal, expects disvalue to be far more prominent in the future will consider this increase in humane values much more important, relatively speaking.
Question 13 seems under-specified to me, specifically this part: “Their members are equally happy.” Does this mean their level of welfare is the same, but it could be at any level for the purposes of this question? Does the use of “happy” in particular mean the question assumes this constant level of welfare is net positive? Could the magnitudes of happiness and suffering differ between people as long as the “net welfare” is positive, assuming it’s possible to make that aggregation?
I think these questions matter because they influence whether you interpret the answers as a result of population-ethical factors, or of other things like the respondents’ beliefs about the moral weight of happiness vs. suffering. Someone could coherently accept totalism yet consider the smaller world better if, for instance, they think the larger number of cases of extreme suffering in the larger population (simply because there are more people for whom things could go very wrong) makes it worse.
A priori, I expect suffering-focused intuitions to be in the minority, but in any case it’s not obvious that the answers to #13 reveal non-totalist or irrational population ethics among the respondents.
“Trivial, but in a Derek Parfit way” is honestly the highest compliment I could ever receive.
I see, thank you—wasn’t sure what might have been hidden in “Other.” :)
A better alternative is to recognize that our own future selves, and our descendants, will be able to “debug” the unpredictable consequences of the actions we take and systems we create. They can do this by creating sustainable alternatives, building resiliency, and improving their planning and evaluation. They will be motivated by self-interest to do so, and enabled by their increasing knowledge. [emphasis mine]
This point doesn’t hold in the case of animal welfare. This might seem like a minor nitpick on my part, but for EAs who prioritize animal welfare yet are also concerned about long-term effects, it’s a pretty crucial thing to note. Indeed, I’d suspect that going with what seems best right now (without more thoroughly investigating the long-term consequences that we could in principle discover upon reflection) could harm the reputation of animal welfare activism, because this approach would seem especially reckless given that animals aren’t in a position to save themselves from the negative consequences of our choices.
An analogous point holds more weakly even for human-centric causes, I think. Just because future humans will be in a position to debug interventions we make in the present, that doesn’t make it prudent for us to neglect the work of considering the (often conflicting) long-term effects that we could identify if we worked harder. I worry that this attitude places a burden on future people that they didn’t ask for, unless I’m misunderstanding your general claim.
I think the following is a typo:
not coming to exist at all would be strictly worse than coming to exist with non-maximal utility
The transitivity argument you presented shows that it’s strictly better.
Nitpicks aside, thank you for sharing these ideas! I think identifying interests (or desires associated with experiences), rather than persons, as the morally relevant objects is crucial.
While I see the intuitive appeal of this idea, it honestly seems a bit ad hoc. The physics analogy is interesting, yes, but we should be careful not to mistake the practical usefulness of local-level deontology or virtue ethics for an actual normative difference between levels. If we just accept the local heuristics as useful for social cohesion etc. without critically assessing whether we could do better, we run the risk of not actually improving sentient experience—just rationalizing standards that mainly exist because they were evolutionarily expedient, or because they maintain some power structure.
To be more specific, it’s very much an open question whether trying to be a “good” friend/family member, in ways that significantly privilege your friends/family over others, actually achieves more good in the long run. It seems very unlikely to me that, say, (A) buying or making a few hundred dollars’ worth of presents for people during holidays (reciprocated with similar presents, many of which in my experience honestly haven’t been worth the money even though I appreciate the thought) makes the world a better place than (B) spending that money/time on the seemingly cold utilitarian choice.
The usual objection to this is that B weakens social bonds or makes people trust you less. But: (1) from the perspective of the people or animals you’d be helping by choosing B, those bonds and small degrees of weakened trust would probably seem paltry and frivolous by comparison to their suffering. There also doesn’t seem to be much robust evidence supporting this claim anyway; it’s just an intuition I’ve seen repeated without justification. (2) It’s possible that this is one of several social norms that we can change over time by challenging the assumption that it’s eternal; in the short run, perhaps people think of you as cold or weird, but if enough people follow suit, maybe refusing to waste money on trivialities for holidays could become normal. Omnivores have argued that veganism threatens social bonds and the (particularly American) culture of eating meat together; cf. this article. I think that argument is self-evidently weak in the face of great animal suffering, so analogously it isn’t a stretch to suppose that deontological norms we currently consider necessary for social cohesion are disposable, if we challenge them.
I can also imagine being persuaded that AI alignment research is as important as I think but something else is even more important, like maybe s-risks or some kind of AI coordination thing.
Huh, my impression was that the most plausible s-risks we can sort-of-specifically foresee are AI alignment problems—do you disagree? Or is this statement referring to s-risks as a class of black swans for which we don’t currently have specific imaginable scenarios, but if those scenarios became more identifiable you would consider working on them instead?
If I remember correctly, the 2019 survey asked about utilitarians’ identification as classical vs. negative utilitarian, plus some other distinctions. Will those results be included in a future post? I’m very curious to see them.
Great post, Abraham!
You mention “preventing x-risks that pose specific threats to animals over those that only pose threats to humans”—which examples of this did you have in mind? It’s hard for me to imagine a risk factor for the extinction of all nonhuman wildlife that wouldn’t also apply to humans, aside perhaps from an asteroid that humans could avoid by moving to another planet without choosing to bring wild animals along to protect them. Though I haven’t spent much time thinking about non-AI x-risks, so it’s likely the failure is in my imagination.
I think it’s also worth noting that the takeaway from this essay could be that x-risk to humans is primarily bad not because of effects on us/our descendants, but because of the wild animal suffering that would not be relieved in our absence. I’m not sure this would make much difference to the priorities of classical utilitarians, but it’s an important consideration if reducing suffering is one’s priority.
Got it, so if I’m understanding things correctly, the claim is not that many longtermists are necessarily neglecting x-risks that uniquely affect wild animals, just that they are disproportionately prioritizing risks that uniquely affect humans? That sounds fair, though like other commenters here, the crux that keeps me from fully endorsing this conclusion is that I think the expected amount of artificial sentience could be larger than that of organic humans and wild animals combined. I agree with your assessment that this isn’t something that many (non-suffering-focused) longtermists emphasize in common arguments, though; the focus is still on humans.
I like this analysis! Some slight counter-considerations:
Displacements can also occur in donations, albeit probably less starkly than with jobs, which are discrete units. If my highest priority charity announces its funding gap somewhat regularly, and I donate a large fraction of that gap, this would likely lower the expected amount donated by others to this charity and this difference might be donated to causes I consider much less important. (Thanks to Phil Trammell for pointing out this general consideration; all blame for potentially misapplying it to this situation goes to me.)
Also, in the example you gave where about 10% of people highly prioritize cause A, wouldn’t we expect the multiplier to be significantly larger than 0.1, because, conditional on a person applying to position P, they are quite likely to have a next-best option that is closely aligned with yours? Admittedly this makes my first point less of a concern, since you could also argue that the counterfactual donor to an unpopular cause I highly prioritize would go on to fund similar, probably neglected causes.
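To make the arithmetic behind that second point concrete, here’s a minimal toy sketch of how I’m thinking about it. All of the numbers (the 60% and 70% in particular) are invented purely for illustration, not taken from your post:

```python
# Toy illustration of the "conditional on applying" point above.
# All numbers are hypothetical and chosen only for the example.

base_rate_aligned = 0.10        # share of the general talent pool that highly prioritizes cause A
p_aligned_given_applied = 0.60  # assumed share of applicants to position P who prioritize cause A
next_best_option_value = 0.70   # assumed value of a displaced applicant's next-best option,
                                # relative to them holding position P

# Naive multiplier: treat the displaced candidate like a random member of the talent pool.
naive_multiplier = base_rate_aligned * next_best_option_value              # 0.07

# Conditioning on the fact that the displaced candidate actually applied to P.
conditional_multiplier = p_aligned_given_applied * next_best_option_value  # 0.42

print(f"Naive multiplier:       {naive_multiplier:.2f}")
print(f"Conditional multiplier: {conditional_multiplier:.2f}")
```

Under those (made-up) assumptions, conditioning on the fact that the displaced person applied at all raises the multiplier well above the 0.1 base rate.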
Do non-utilitarian moral theories have readily available solutions to infinite ethics either? Suggesting infinite ethics as an objection I think only makes sense if it’s a particular problem for utilitarianism, or at least a worse problem for utilitarianism than for anything else.
I’d also recommend the very repugnant conclusion as an important objection (at least to classical or symmetric utilitarianism).
the problem comes from trying to compare infinite sets of individuals with utilities when identities (including locations in spacetime) aren’t taken to matter at all
Ah, that’s fair—I think I was mistaking the technical usage of “infinite ethics” for a broader class of problems involving infinities in ethics in general. Deontological theories sometimes imply “infinite” badness of actions, which can have counterintuitive implications as discussed by MacAskill in his interviews with 80k, which is why I was confused by your objection.
Some people find it prohibitively costly
This isn’t a “minds very different from our own” claim, though. It’s an empirical claim about how expensive a vegan diet needs to be to be nutritious. Cam stated: “But it’s also quite feasible to meet most people’s dietary requirements with vegan foods that cost just as much as, or even less than, animal-based foods.” What exactly in that statement do you dispute?
ETA: Even though there is a risk of overstating the case that veganism is universally “cheap,” at present that case seems, if anything, far understated. I think the value of Cam’s comment is in noting that veganism is at the very least cheaper than most people suspect before trying it.
One thought to have about this case is that you have the wrong motivation in visiting your friend. Plausibly, your motive should be something like ‘my friend is suffering; I want to help them feel better!’ and not ‘helping my friend has better consequences than anything else I could have done.’ Imagine what it would be like to frankly admit to your friend, “I’m only here because being here had the best consequences. If painting a landscape would have led to better consequences, I would have stayed home and painted instead.” Your friend would probably experience this remark as cold, or at least overly abstract and aloof.
This doesn’t resonate with me at all, personally. What exactly could be a purer, warmer motivation for helping a friend than the belief that helping them is the best thing you could be doing with your time? That belief implies their well-being is very important; it’s not just an abstract consequence: their suffering really exists, and by helping them you are choosing to relieve it.
I’m still confused by this. The more impartial someone’s standards, if anything, the more important you should feel if they still choose to prioritize you.
It’s more circumstantial if they prioritize you based on impartial concern; it just happened to be the best thing they could do.
Hm, to my ear, prioritizing a friend just because you happen to be biased towards them is more circumstantial. It’s based on accidents of geography and life events that led you to become friends with that particular person rather than with other people you’ve never met.
that’s pretty small compared to the impartial stakes we face
I agree, though that’s a separate argument. I was addressing the claim that conditional on a consequentialist choosing to help their friend, their reasons are alienating, which I don’t find convincing. My point was precisely that because the standard is so high for a consequentialist, it’s all the more flattering if your friend prioritizes you in light of that standard. It’s quite difficult to reconcile this with my revealed priorities, as someone who definitely doesn’t live up to my own consequentialism, yes, but I bite the bullet that this is really just a failure on my part (or, as you mention, the “instrumental” reasons to be a good friend win out anyway).
the reason you maintain and continue to value the relationship is not so circumstantial, and has more to do with your actual relationship with that other person
Right, but even so it seems like a friend who cares for you because they believe caring for you is good, and better than the alternatives, is “warmer” than one who doesn’t think this but merely follows some partiality (or again, bias) toward you.
I suppose it comes down to conflicting intuitions on something like “unconditional love.” Several people, not just hardcore consequentialists, find that concept hollow and cheap, because loving someone unconditionally implies you don’t really care who they are, in any sense other than the physical continuity of their identity. Conditional love identifies the aspects of the person actually worth loving, and that seems more genuine to me, though less comforting to someone who wants (selfishly) to be loved no matter what they do.
I suppose the point is that you don’t recognize that reason as an ethical one; it’s just something that happens to explain your behaviour in practice, not what you think is right.
Yeah, exactly. It would be an extremely convenient coincidence if our feelings of partiality toward friends and family, which evolved in small communities where those feelings were largely sufficient for social cohesion, just happened to be the ethically best things for us to follow, now that we live in a world where it’s feasible for someone to do a lot more good by being impartial.
Edit: based on one of your other comments, it seems we actually agree more than I thought.
Agree on the “should” part! As for “can”: a potentially valuable side project someone (perhaps myself, with the extra time I’ll have on my hands before grad school) might want to try is looking for empirical predictors of success in priority fields. Something along these lines, although unfortunately the linked paper’s formula wouldn’t be of much use to people who haven’t already entered academia.