80,000 Hours has a cause quiz, possibly a bit dated and sometimes a bit buggy (sometimes you see the rankings during the quiz, sometimes you only see them at the end, and sometimes there’s an extra question).
Question 4 is particularly relevant for person-affecting views, but it might not get at your specific views, since there are many different kinds of person-affecting views:
Question 4: Here are two scenarios:
1. A nuclear war kills 90% of the human population, but we rebuild and civilization eventually recovers.
2. A nuclear war kills 100% of the human population and no people live in the future.
How much worse is the second scenario?
Besides the causes listed there, you could also consider mental health and pain relief, and, since you think death is bad, cryonics and life extension.
Whether or not you think it’s bad to bring absolutely miserable lives into existence (the asymmetry) could have important consequences. If you do think it’s bad, then the longterm future could matter a lot.
Your response to the nonidentity problem also matters. Essentially: if either A or B will be born, and the value in (total quality of) their lives would be X and Y respectively, with X < Y, does it matter to you whether A or B is born? Is that choice the same to you as a choice between A being born with value X and A being born with value Y? As an example, suppose a couple wants to have a child, but the mother has been infected with the Zika virus. Considering only the effects on the child, should the couple wait to conceive until it’s unlikely the child would be affected by Zika? If they wait, a different child will be born. If you don’t think it matters whether A or B is born, regardless of X and Y (even if one or both would be miserable), then the longterm future basically shouldn’t matter to you.
If you do think it’s bad to bring bad lives into existence, or that it matters whether A or B is born (considering only their interests), then the longterm future could still matter a lot. Assuming you do focus on the longterm future (you might still have empirical doubts), your focus would be on preventing s-risks or on ensuring the future’s quality is as good as possible conditional on moral patients existing, but not on ensuring moral patients exist for their own sake. See the link about s-risks, trammell’s answer about this paper, or the talk about that paper here.