Hey everyone! I’m very interested in Effective Altruism, and most of my information on it comes from 80,000 Hours’ website. The info is very useful, but, as the title of this question suggests, I hold person-affecting views, so it occurs to me that the world’s largest-scale and most serious problems might look different in my worldview than in theirs. (If you aren’t familiar with the term, person-affecting views hold that actions are only morally relevant to beings that will exist regardless of whether or not the action is taken; for example, I think the world ending would be bad because 7 billion people would die, but not because their descendants would be prevented from ever being born.) Does anyone have thoughts on where I can find problem profiles and recommendations for an Effective Altruism lifestyle based on a person-affecting worldview (especially for a conservative Christian worldview)?
I’m struggling to think of much written on this topic—I’m a philosopher and reasonably sympathetic to person-affecting views (although I don’t assign them my full credence), so I’ve been paying attention to this space. One non-obvious consideration is whether to take an asymmetric person-affecting view (extra happy lives have no value, extra unhappy lives have negative value) or a symmetric person-affecting view (extra lives have no value).
If the former, one is pushed towards some concern for the long term anyway, as Halstead argues here, because there will be lots of unhappy lives in the future that it would be good to prevent from existing.
If the latter—which I think, after long reflection, is the more plausible version, even though it is prima facie more unintuitive—then that is practically sufficient, but not necessary, for concentrating on the near term, i.e. this generation of humans; animals, for the most part, won’t exist whatever we choose to do. I say not necessary because one could, in principle, think all possible lives matter and still focus on near-term humans due to practical considerations.
But ‘prioritise current humans’ still leaves it wide open what you should do. The ‘canonical’ EA answer for how to help current humans is to work on global (physical) health and development. It’s not clear to me that this is the right answer. If I can be forgiven for tooting my own horn, I’ve written a bit about this in this (now somewhat dated) post on mental health, the relevant section being “why might you—and why might you not—prioritise this area [i.e. mental health]”.
You could rescue or even buy animals from factory farms. Plausibly, doing this for factory-farmed chickens could be very cost-effective with such person-affecting views, and buying them from factory farms in developing countries might be especially so. Buying factory-farmed animals would be pretty uncooperative with the rest of the animal movement, though, and if you assign some moral weight to asymmetric or symmetric totalist views, this could be pretty bad in expectation (although the expected effect on supply is less than one per animal saved, so this might not look actively harmful with symmetric views).
EDIT: The value of information question is interesting. Suppose it would take you 2 months to research and carry out a rescue/buy for factory farmed chickens raised for meat. Then it wouldn’t be worth even looking into, because the chickens alive when you start will all have been killed already. But if someone does enough of the work for you that you could do it within about a month, then it could be worth it to do. Egg-laying hens live longer, probably about a year or two.
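To make the timing point concrete, here’s a rough sketch of the comparison. The broiler lifespan (roughly 6 weeks to slaughter) is my own assumption rather than something stated above; the figure for egg-laying hens comes from the comment.

```python
# Rough value-of-information sketch for the rescue timing point above.
# Assumption (mine, not from the comment): meat ("broiler") chickens are
# typically slaughtered at roughly 6 weeks old. The comment puts egg-laying
# hens at about 1-2 years.

WEEKS_PER_MONTH = 4.3

def rescue_worthwhile(remaining_lifespan_weeks: float, lead_time_months: float) -> bool:
    """A rescue only helps chickens that are still alive when you can act."""
    return remaining_lifespan_weeks > lead_time_months * WEEKS_PER_MONTH

# Broilers: even a freshly hatched bird has only ~6 weeks left.
print(rescue_worthwhile(remaining_lifespan_weeks=6, lead_time_months=2))   # False
print(rescue_worthwhile(remaining_lifespan_weeks=6, lead_time_months=1))   # True, just barely
# Egg-laying hens: ~52+ weeks left, so even slower projects can still help them.
print(rescue_worthwhile(remaining_lifespan_weeks=52, lead_time_months=2))  # True
```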
Working on abortion might be similar for someone who thought death was bad.
Yes, I agree you could save existing animals. I’d actually forgotten until you jogged my memory, but I talk about that briefly in my thesis (chapter 3.3, p92) and I suppose saving animals from shelters might be more cost-effective than saving humans (given a PAV combined with deprivationism about the badness of death).
If I weren’t interested in creating more new beings with positive lives I’d place greater priority on:
Ending the suffering and injustice inflicted on animals in factory farming
Ending the suffering of animals in the wilderness
Slowing ageing, or cryonics (so the present generation can enjoy many times more positive value over the course of their lives)
Radical new ways to dramatically raise the welfare of the present generation (e.g. direct brain stimulation as described here)
I haven’t thought much about what would look good from a conservative Christian worldview.
Welcome!
It’s a common view. Some GiveWell staff hold this view, and indeed most of their work involves short-term effects, probably for epistemic reasons. Michael Plant has written about the EA implications of person-affecting views, and emphasises improvements to world mental health.
Here’s a back-of-the-envelope estimate for why person-affecting views might still be bound to prioritise existential risk though (for the reason you give, but with some numbers for easier comparison).
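The linked estimate isn’t reproduced here, but a minimal sketch of the kind of comparison it makes might look like the following. All of the numbers are purely hypothetical placeholders, so the conclusion depends entirely on what you plug in.

```python
# Illustrative back-of-the-envelope comparison (hypothetical numbers only):
# even counting only people alive today, reducing extinction risk can look
# competitive if the risk reduction is cheap enough.

people_alive_today = 7e9          # the "7 billion" from the question above
life_years_at_stake_each = 40     # assumed average remaining life expectancy
extinction_risk_reduction = 1e-4  # assumed absolute risk reduction purchased
cost = 1e9                        # assumed cost of that reduction, in dollars

expected_life_years_saved = (
    people_alive_today * life_years_at_stake_each * extinction_risk_reduction
)
cost_per_life_year = cost / expected_life_years_saved
print(f"~${cost_per_life_year:.0f} per expected life-year of existing people")
```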
Dominic Roser and I have also puzzled over Christian longtermism a bit.
This paper is also relevant to the EA implications of a variety of person-affecting views. https://globalprioritiesinstitute.org/wp-content/uploads/2020/Teruji_Thomas_asymmetry_uncertainty.pdf
There’s also a talk version here: https://www.youtube.com/watch?v=DAavPa8j0lM
If you think that embryos and fetuses have moral value, then abortion becomes a very important issue in terms of scale. However, it’s not very neglected, and the evidence suggests that increased access to contraceptives, not restricted access to abortion services, is driving the decline in abortion rates in the U.S.
Designing medical technology to reduce miscarriages (which are spontaneous abortions) may be an especially important, neglected, and tractable way to prevent embryos/fetuses and parents from suffering. (10-50% of pregnancies end in miscarriages.)
The linked opinion piece asserts that abortion regulations are not responsible for the improvement, but doesn’t seem to provide any evidence to back it up?
I am not that familiar with the literature, but it would seem prima facie rather implausible to me that making something illegal wouldn’t help reduce its prevalence. If statistics suggest the US decline is being driven by other policies, I would guess this is because the restrictions that have been put in place are quite weak—abortion-for-convenience remains legal in all 50 states, and even if your state did impose some limitation, it cannot stop someone travelling to an unregulated state. However, a quick google suggests that some academic research does find that the restrictions that have been put in place have helped reduce the rate. Additionally, it seems that the number of abortions in Ireland has gone up significantly since their law change, even taking into account people travelling to the UK, so presumably reversing that change would help reduce the number. This also fits with my impression of what has happened in many other countries when they banned/unbanned abortion.
I totally agree that reducing miscarriage rates could be very interesting. Are you aware of any tractable interventions? I had a little look a few years ago but did not find anything very satisfactory.
Plausibly, fetuses will not be morally relevant on such a view, as they won’t exist whatever we choose to do.
It would be interesting if person-affecting arguments led one to pass on reducing abortion, because while you care about currently existing babies, by the time any intervention you might support today has any effect, they will already have been born or not, and it will be too late to help them. There will be a new cohort in need of help, of course, but you don’t care about them until they’re conceived, so you won’t be interested in working to help them now.
More generally, you would neglect any intervention that only affects people under the age of X if it will take longer than X years to implement the intervention.
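A toy version of that rule, read strictly:

```python
# Strict reading of the rule above: an intervention that only helps people
# under age X is moot (on this view) if it takes more than X years to deliver,
# since everyone it could have helped will have aged out of the affected group.

def worth_pursuing(max_beneficiary_age_years: float, years_to_implement: float) -> bool:
    return years_to_implement < max_beneficiary_age_years

print(worth_pursuing(max_beneficiary_age_years=5, years_to_implement=10))   # False
print(worth_pursuing(max_beneficiary_age_years=18, years_to_implement=10))  # True
```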
However, if such an initiative were started by longtermists, person-affecting-view-ists might join it halfway through. This suggests an interesting way for longtermists to leverage the help of people with person-affecting views! (It is possible you might think it was immoral to exploit their temporal inconsistency in this way, however.)
This is assuming that death isn’t bad, though, right? In a sense, the fetus exists in the whole of the outcome, past, present and future together, regardless of what we do, and then it becomes a question of whether or not a longer life can be better on such an account for a fetus (and whether or not fetuses should count). New_EA did write:
EDIT: Ah, did you mean we’d always be too late? On a wide person-affecting view, the future ones could still matter.
This might not be the case if you have a narrow person-affecting view so that whether A or B is born doesn’t matter, even if one would be substantially better off than the other (see my answer on the nonidentity problem). In that case, the fetuses that don’t yet exist (or those that won’t exist until after some point) might not matter, because which ones would come to exist could be sensitive to your actions (think butterfly effect). Then, the scale of the problem is restricted to the fetuses whose identities are already determined, and you might be too late to help almost all of them.
Same conclusion with presentist views, so that only those that currently exist matter.
EDIT: Larks made the same point.
80,000 Hours has a cause quiz, possibly a bit dated and sometimes a bit buggy (sometimes you see the rankings during the quiz, sometimes you only see them at the end, and sometimes there’s an extra question).
Question 4 is particularly relevant for person-affecting views, but it might not get at your specific views, since there are many different kinds of person-affecting views.
Besides the causes listed there, there could also be mental health and pain relief, and since you think death is bad, cryonics and life extension.
Whether or not you think it’s bad to bring absolutely miserable lives into existence (the asymmetry) could have important consequences. If you do think it’s bad, then the longterm future could matter a lot.
Your response to the nonidentity problem also matters. Essentially, if either A or B will be born, and the value in (total quality of) their lives will be X and Y, respectively, with X < Y, does it matter to you whether A or B is born? Is this the same to you as the question of whether A is born and lives with value X or with value Y? As an example, if a couple wants to have a child, but the mother has been infected with the Zika virus, considering only the effects on the child, should the couple wait to conceive until it’s unlikely the child would be affected by Zika? If they wait, a different child will be born. If you don’t think it matters whether A or B is born, regardless of X and Y (even if one or either would be miserable), then basically the longterm future shouldn’t matter to you.
If you do think it’s bad to bring bad lives into existence, or that it matters whether A or B is born (considering only their interests), then the longterm future could still matter a lot. Assuming you do focus on the longterm future (you might still have empirical doubts), your focus would be on preventing s-risks or ensuring the future’s quality is as good as possible, conditional on moral patients existing, but not on ensuring moral patients exist for their own sake. See the link about s-risks, trammell’s answer about this paper, or the talk about that paper here.
The Effective Altruism for Christians website and Facebook group might be a useful place to start, if you haven’t come across those before.
I don’t think they have developed problem profiles etc., but the people there may have a similar outlook to you and be able to point you to resources that are more relevant from a Christian and/or person-affecting perspective.
Even if you’re just 99% sure that Christianity is true, it might still make sense to focus on worlds where it’s false, given that in the world where it’s true we already have an aligned superintelligence and are all immortal.
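One hypothetical way to see why the 1% world can dominate, assuming (as the comment does) that your actions have roughly zero counterfactual impact in the world where everyone is already taken care of:

```python
# Hypothetical expected-value sketch of the comment above. All numbers are
# made up for illustration; "impact" is in arbitrary units.

p_true = 0.99
impact_if_true = 0.0     # assumed: outcomes already secured, ~no counterfactual impact
impact_if_false = 100.0  # assumed: your efforts matter a lot in this world

expected_impact = p_true * impact_if_true + (1 - p_true) * impact_if_false
print(expected_impact)   # 1.0 -- driven entirely by the 1% world
```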
The book The Ethics of Cryonics: Is it Immoral to be Immortal? talks about cryopreserving all fetuses. Cryonics might also be the only way to bring people currently existing to a time when they can live rich and long lives.
Hey there!
The universe/multiverse may be very large and (in the fullness of time) may contain a vast number of beings that we should care about and that we (and other civilizations similar to us) may be able to help in some way by using our cosmic endowment wisely. So person-affecting views seem to prescribe the standard maxipok strategy (see also The Precipice by Toby Ord).
[EDIT: by “we should care” I mean something like “we would care if we knew all the facts and had a lot of time to reflect”.]
I think you might not have clocked the OP’s comment that the morally relevant beings are just those that exist whatever we do, which would presumably rule out concern for lives in the far future.*
*Pedantry: there could actually be future aliens who exist whatever we do now. Suppose some aliens will turn up on Earth in 1 million years and we’ve had no interaction with them. They will be ‘necessary’ from our perspective and thus the type of person-affecting view stated would conclude such people matter.**
**Further pedantry: if our actions changed their children, which they presumably would, it would just be the first generation of extraterrestrial visitors who mattered morally on this view.
It doesn’t seem like mere pedantry if it requires substantial revision of the view to retain the same action recommendations. Symmetric person-affecting total utilitarianism does look to be dominated by these sorts of possibilities of large stocks of necessary beings without some other change. I’m curious what your take on the issues raised in that post is.
What I tried to say is that the spacetime of the universe(s) may contain a vast number of sentient beings regardless of what we do. Therefore, achieving existential security and having something like a Long Reflection may allow us to help a vast number of sentient beings (including ones outside our future light cone).
I think we’re not interpreting the person-affecting view described in the OP in the same way. The way I understand the view (and the OP is welcome to correct me if I’m wrong) it entails we ought to improve the well-being of the extraterrestrial visitors’ children (regardless of whether our actions changed them / caused their existence).
Oh wow, this made me update towards caring about people in the future even if the person-affecting view is true (because we might not change their existence if they are both in the future *and* in a far away location).