The person-affecting value of existential risk reduction
Introduction
The standard motivation for the far future cause area in general, and existential risk reduction in particular, is to point to the vast future that is possible provided we do not go extinct (see Astronomical Waste). One crucial assumption made is a 'total' or 'no-difference' view of population ethics: in sketch, it is just as good to bring a person into existence with a happy life for 50 years as it is to add fifty years of happy life to someone who already exists. Thus the 10^lots of potential people give profound moral weight to the cause of x-risk reduction.
Population ethics is infamously recondite, and so disagreement with this assumption is commonplace; many find at least some form of person-affecting/asymmetrical view plausible: that the value of 'making happy people' is either zero, or at least much lower than the value of making people happy. Such a view would remove a lot of the upside of x-risk reduction, as most of its value (by the lights of the total view) is ensuring a great host of happy potential people exist.
Yet even if we discount the (forgive me) person-effecting benefit, extinction would still entail vast person-affecting harm. There are 7.6 billion people alive today, and 7.6 billion premature deaths would be deemed a considerable harm by most. Even fairly small (albeit non-Pascalian) reductions in the likelihood of extinction could prove highly cost-effective.
To my knowledge, no one has 'crunched the numbers' on the expected value of x-risk reduction by the lights of person-affecting views. So I've thrown together a Guesstimate model as a first-pass estimate.
An estimate
The (forward) model goes like this:
There are currently 7.6 billion people alive on earth. The worldwide mean age is 38, and worldwide life expectancy is 70.5.
Thus, very naively, if 'everyone died tomorrow', the average number of life years lost per person is 32.5, and the total loss is 247 billion life years.
Assume the extinction risk is 1% over this century, uniform by year (i.e. the risk this year is 0.0001, ditto the next one, and so on.)
Also assume the tractability of x-risk reduction is something like this (borrowing from Millett and Snyder-Beattie): 'There's a project X that is expected to cost 1 billion dollars each year, and would reduce the risk (proportionately) by 1%' (i.e. if we spent a billion each year this century, x-risk over this century declines from 1% to 0.99%).
This gives a risk reduction per year of around 1.3 × 10^-6, and so an expected value of around 330,000 years of life saved.
Given all these things, the model spits out a 'cost per life year' of $1,500 to $26,000 (mean $9,200).
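For those who want to follow the structure of the calculation, here is a minimal sketch in Python. This is not the Guesstimate sheet itself: the distributional choices below are illustrative assumptions of mine, so the outputs will be in the same ballpark as, but will not exactly reproduce, the figures above.

```python
import numpy as np

# Sketch of the model's structure: $1B spent in one year buys a ~1% proportional
# reduction in that year's extinction risk (~0.0001), valued in life-years of the
# current population. The distributions below are illustrative assumptions.
rng = np.random.default_rng(0)
n = 100_000

population = 7.6e9                              # people alive today
life_years_at_stake = population * (70.5 - 38)  # ~247 billion life-years

# ~1% extinction risk this century, spread uniformly over years (~0.0001/year),
# with some assumed spread around that value.
annual_risk = rng.lognormal(np.log(1e-4), 0.5, n)

# 'Project X': $1B per year buys a ~1% proportional risk reduction
# (cf. the 0.005-0.02 range mentioned in the comments on the Guesstimate).
prop_reduction = rng.lognormal(np.log(0.01), 0.35, n)

cost = 1e9                                      # $1B for a single year
life_years_saved = annual_risk * prop_reduction * life_years_at_stake
cost_per_life_year = cost / life_years_saved

print(f"mean life-years saved: {life_years_saved.mean():,.0f}")
print(f"cost per life-year: ${np.percentile(cost_per_life_year, 5):,.0f} "
      f"to ${np.percentile(cost_per_life_year, 95):,.0f} "
      f"(mean ${cost_per_life_year.mean():,.0f})")
```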
Caveats and elaborations
The limitations of this are nigh-innumerable, but I list a few of the most important below, in approximately ascending order.
Zeroth: The model has a wide range of uncertainty, and reasonable sensitivity to distributional assumptions: you can shift the mean estimate and range by a factor of 2 or so depending on whether the distributions used are beta or log-normal, or by tweaking their variance.
First: Adjustment to give 'cost per DALY/QALY' would be somewhat downward, although not dramatically (a factor of 2 would imply everyone who continues to live does so with a disability weight of 0.5, in the same ballpark as those used for major depression or blindness).
Second: trends may have a large impact, although their importance is modulated by which person-affecting view is assumed. I deliberately set up the estimate to work in a 'one shot' single-year case (i.e. the figure applies to a 'spend $1B to reduce extinction risk in 2018 from 0.0001 to 0.000099' scenario).
By the lights of a person-affecting view which considers only people who exist now, making the same investment 10 years from now (i.e. spend $1B to reduce extinction risk in 2028 from 0.0001 to 0.000099) is less attractive, as some of these people will have died, and the new people who have replaced them have little moral relevance. These views thus imply a fairly short time horizon, and are particularly sensitive to x-risk in the near future. Given the '1%' per century is probably not uniform by year, and is plausibly lower now but higher later, this would imply a further penalty to cost-effectiveness.
Other person-affecting views consider people who will necessarily exist (however cashed out) rather than whether they happen to exist now (planting a bomb with a timer of 1000 years still accrues person-affecting harm). In an 'extinction in 100 years' scenario, this view would still count the harm to everyone alive then who dies, although it would still discount the foregone benefit of people who 'could have been' subsequently in the moral calculus.
Thus trends in the factual basis become more salient. One example is the ongoing demographic transition: the consequently older population gives smaller values of life-years saved if protected from extinction in the future. This would probably make the expected cost-effectiveness somewhat (but not dramatically) worse.
Third: a lot turns on the estimate for marginal 'x-risk reduction'. I think the numbers offered for the base rate, and for how much it can be reduced, lean on the conservative side of the consensus of far-future EAs. Shifting the (implied) scale or tractability by an order of magnitude imposes a commensurate change on the cost-effectiveness estimate. Yet in such circumstances the bulk of the disagreement is explained by empirical disagreement rather than a different take on the population ethics.
Finally, this only accounts for something like the (welfare) 'face value' of existential risk reduction. There would be some further benefits by the lights of the person-affecting view itself, or of ethical views which those holding a person-affecting view are likely sympathetic to: extinction might impose other harms beyond years of life lost; there could be person-affecting benefits if some of those who survive can enjoy extremely long and happy lives; and there could be non-welfare goods on an objective list which rely on non-extinction (among others). On the other side, those with non-deprivationist accounts of the badness of death may still discount the proposed benefits.
Conclusion
Notwithstanding these challenges, I think the model, and the result that the 'face value' cost-effectiveness of x-risk reduction is still pretty good, is instructive.
First, there is a common pattern of thought along the lines of, 'X-risk reduction only matters if the total view is true, and if one holds a different view one should basically discount it.' Although rough, this cost-effectiveness guesstimate suggests this is mistaken. It seems unlikely x-risk reduction is the best buy by the lights of a person-affecting view (we should be suspicious if it were), and ~$10,000 per life-year compares unfavourably to the best global health interventions; yet it is still a good buy: it compares favourably to the marginal cost-effectiveness of rich-country healthcare spending, for example.
Second, although it seems unlikely that x-risk reduction would be the best buy by the lights of a person-affecting view, this would not be wildly outlandish. Those with a person-affecting view who think x-risk is particularly likely, or that the cause area has easier wins available than implied in the model, might find it is where the best opportunities to make a difference lie. It may therefore supply reason for those with such views to investigate the factual matters in greater depth, rather than ruling it out based on their moral commitments.
Finally, most should be morally uncertain in matters as recondite as population ethics. Unfortunately, how to address moral uncertainty is similarly recondite. If x-risk reduction is 'good but not the best' rather than 'worthless' by the lights of person-affecting views, this likely implies x-risk reduction looks more valuable whatever the size of the 'person-affecting party' in one's moral parliament.
Thanks for writing this up! This does seem to be an important argument not made often enough.
To my knowledge this has been covered a couple of times before, although not as thoroughly.
Once by the Oxford Prioritization Project, although they approached it from the other end, instead asking 'what absolute percentage x-risk reduction would you need to get for £10,000 for it to be as cost-effective as AMF?' and finding the answer of 4 x 10^-8%. I think your model gives £10,000 as reducing x-risk by 10^-9%, which fits with your conclusion of close but not quite as good as global poverty.
Note they use 5% before 2100 as their risk, and also do not consider QALYs, instead only looking at 'lives saved', which likely biases them against AMF, since it mostly saves children.
We also calculated this as part of the Causal Networks Model I worked on with Denise Melchin at CEA over the summer. The conclusion is mentioned briefly here under 'existential effectiveness'.
I think our model was basically the same as yours, although we were explicitly interested in the chance of existential risk before 2050, and did not include probabilistic elements. We also tried to work in QALYs, although most of our figures were more bullish than yours. We used by default:
7% chance of existential risk by 2050, which in retrospect seems extremely high, but I think it was based on a survey from a conference.
The world population in 2050 will be 9.8 billion, and each death will be worth -25 QALYs (so 245 billion QALYs at stake, very similar to yours).
For the effectiveness of research, we assumed that 10,000 researchers working for 10 years would reduce x-risk by 1 percentage point (i.e. from 7% to 6%). We also (unreasonably) assumed each researcher-year cost £50,000 (where I think the true number should be at least double that, if not much more).
Our model then had various other complicated effects, modelling both 'theoretical' and 'practical' x-risk based on government/industry willingness to use the advances, but these were second-order and can mostly be ignored.
Ignoring these second-order effects then, our model suggested it would cost £5 billion to reduce x-risk by 1 percentage point, which corresponds to a cost of about £2 per QALY. In retrospect this should be at least 1 or 2 orders of magnitude higher (increasing researcher cost and decreasing the x-risk probability by an order of magnitude each).
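A quick back-of-the-envelope check of the arithmetic behind that £2 per QALY figure (a sketch; all inputs are the defaults stated above, and the second-order effects are ignored):

```python
# Rough check of the Causal Networks figures quoted above (inputs as stated).
cost = 10_000 * 10 * 50_000     # 10,000 researchers x 10 years x £50,000/year = £5 billion
qalys_at_stake = 9.8e9 * 25     # 2050 population x ~25 QALYs lost per death = 245 billion
risk_reduction = 0.01           # 1 percentage point (7% -> 6%)

cost_per_qaly = cost / (risk_reduction * qalys_at_stake)
print(f"£{cost:,} total, ~£{cost_per_qaly:.2f} per QALY")  # ~£2 per QALY
```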
I find your x-risk chance somewhat low; I think 5% before 2100 seems more likely. Your cost-per-percent to reduce x-risk also works out as much higher than the one we used, but seems more justified (ours was just pulled from the air as 'reasonable sounding').
I also made a very rough estimate in this article: https://80000hours.org/articles/extinction-risk/#in-total-how-effective-is-it-to-reduce-these-risks Though this estimate is much better and I've added a link to it.
I also think x-risk over the century is over 1%, and we can reduce it much more cheaply than your guess, though it's nice to show it's plausible even with conservative figures.
Butterfly effects change the identities of at least all yet-to-be-conceived persons, so this would have to not be interested in particular people, but population sizes/counterparts.
+1. Navigating this is easier said than done, and one might worry about some sort of temporal parochialism being self-defeating (persons at t1-tn are all better off if they cooperate across cohorts with future-regarding efforts instead of all concerning themselves with those who are morally salient at their corresponding t).
My impression is that those with person-affecting sympathies prefer trying to meet these challenges rather than accept that the moral character of destructive acts changes with a (long enough) delay, or trying to reconcile this with the commonsensical moral importance of more normal future-regarding acts (e.g. climate change, town planning, etc.).
I've been saying to people that I wish there was a post series about all the practical implications of different philosophical positions (I often have the unflattering impression philosophy EAs like to argue about them just because it's their favourite nerd topic, and not because of the practical relevance).
So special thanks to you for starting it! ;-)
See also the models in https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5576214/ (cost-effectiveness of mitigating biorisk) and https://onlinelibrary.wiley.com/doi/full/10.1111/j.1539-6924.2007.00960.x (asteroid risk), which have estimates for the risk level, cost of reducing it, and cost per QALY for different future discount levels.
I think their model ought to include a category of catastrophic risk: they don't have anything between disaster (100,000 deaths) and extinction.
I made a similar observation about AI risk reduction work last year:
'Someone taking a hard "inside view" about AI risk could reasonably view it as better than AMF for people alive now, or during the rest of their lives. I'm thinking something like:
1 in 10 risk of AI killing everyone within the next 50 years. Spending an extra $1 billion on safety research could reduce the size of this risk by 1%.
$1 billion / (0.1 risk reduced by 1% * 8 billion lives) = $125 per life saved. Compares with $3,000-$7,000+ for AMF.
This is before considering any upside from improved length or quality of life for the present generation as a result of a value-aligned AI.
I'm probably not quite as optimistic as this, but I still prefer AI as a cause over poverty reduction, for the purposes of helping the present generation (and those remaining to be born during my lifetime).'
http://effective-altruism.com/ea/18u/intuition_jousting_what_it_is_and_why_it_should/amj
Three cheers for this. Two ways in which the post might understate the case for person-affecting views focusing on ex risk:
Most actions to reduce ex risk would also reduce catastrophic non-ex risks, e.g. efforts to reduce the risk of an existential-threat attack by an engineered pathogen would also reduce the risk of, e.g., >100m people dying in an attack by an engineered pathogen. I would expect that the benefits from reducing GCRs as a side-effect of reducing ex risks would be significantly larger than the benefits accruing from preventing ex risks, because the probability of GCRs is much, much greater. I wouldn't be that surprised if that increased the EV of ex risk by an order of magnitude, thereby propelling ex risk reduction further into AMF territory.
As I have noted before on this forum, most people advancing person-affecting views tend to opt for asymmetric versions where future bad lives matter but future good lives don't. If you're temporally neutral and aggregative, then you end up with a moral theory which is practically exactly the same as negative utilitarianism (priorities one, two, three, four, etc. are preventing future suffering).
It is in general good to reassert that there are numerous reasons to focus on ex risk aside from the total view, including neglectedness, political short-termism, the global public goods aspect, the context of the technologies we are developing, the tendency to neglect rare events, etc.
If someone did take an asymmetric view and really committed to it, I would think you should probably be in favour of increasing existential risk, as that removes the possibility of future suffering, rather than trying to reduce it. I suppose you might have some (not obviously plausible) story about how humanity's survival decreases future suffering: you could think humans will remove misery in surviving non-humans if humans dodge existential risk, but this misery wouldn't be averted if humans went extinct and other life kept living.
I think the argument is as you describe in the last sentence, though I haven't engaged much with the NUs on this.
thanks, gregory. it's valuable to have numbers on this but i have some concerns about this argument and the spirit in which it is made:
1) most arguments for x-risk reduction make the controversial assumption that the future is very positive in expectation. this argument makes the (to my mind even more) controversial assumption that an arbitrary life-year added to a presently-existing person is very positive, on average. while it might be that many relatively wealthy euro-american EAs have life-years that are very positive, on average, it's highly questionable whether the average human has life-years that are on average positive at all, let alone very positive.
2) many global catastrophic risks and extinction risks would affect not only humans but also many other sentient beings. insofar as these x-risks are risks of the extinction of not only humans but also nonhuman animals, to make a determination of the person-affecting value of deterring x-risks we must sum the value of preventing human death with the value of preventing nonhuman death. on the widely held assumption that farmed animals and wild animals have bad lives on average, and given the population of tens of billions of presently existing farmed animals and 10^13-10^22 presently existing wild animals, the value of the extinction of presently living nonhuman beings would likely swamp the (supposedly) negative value of the extinction of presently existing human beings. many of these animals would live a short period of time, sure, but their total life-years still vastly outnumber the remaining life-years of presently existing humans. moreover, most people who accept a largely person-affecting axiology also think that it is bad when we cause people with miserable lives to exist. so on most person-affecting axiologies, we would also need to sum the disvalue of the existence of future farmed and wild animals with the person-affecting value of human extinction. this may make the person-affecting value of preventing extinction extremely negative in expectation.
3) i'm concerned about this result being touted as a finding of a 'highly effective' cause. $9,600/life-year is vanishingly small in comparison to many poverty interventions, let alone animal welfare interventions (where ACE estimates that this much money could save 100k+ animals from factory farming). why does $9,600/life-year suddenly make for a highly effective cause when we're talking about x-risk reduction, when it isn't highly effective when we're talking about other domains?
1) Happiness levels seem to trend strongly positive, given things like the World Values Survey (in the most recent wave, 2014, only Egypt had <50% of people reporting being either 'happy' or 'very happy', although in fairness there were a lot of poorer countries with missing data). The association between wealth and happiness is there, but pretty weak (e.g. Zimbabwe gets 80+%, Bulgaria 55%). Given this (and when you throw in implied preferences, and commonsensical intuitions whereby we don't wonder whether we should jump in the pond to save the child because we're genuinely uncertain it is good for them to extend their life), it seems the average human takes themselves to have a life worth living. (q.v.)
2) My understanding from essays by Shulman and Tomasik is that even intensive factory farming plausibly leads to a net reduction in animal populations, given a greater reduction in wild animals due to habitat reduction. So if human extinction leads to another ~100M years of wildlife, this looks pretty bad by asymmetric views.
Of course, these estimates are highly non-resilient, even with respect to sign. Yet the objective of the essay wasn't to show the result was robust to all reasonable moral considerations, but that the value of x-risk reduction isn't wholly ablated on a popular view of population ethics, somewhat akin to how GiveWell analyses of cash transfers don't try to factor in poor meat-eater considerations.
3) I neither 'tout', nor even state, that this is a finding that 'x-risk reduction is highly effective for person-affecting views'. Indeed, I say the opposite:
thanks for the clarification on (3), gregory. i exaggerated the strength of the valence on your post.
on (1), i think we should be skeptical about self-reports of well-being given the pollyanna principle (we may be evolutionarily hard-wired to overestimate the value of our own lives).
on (2), my point was that extinction risks are rarely confined to only human beings, and events that cause human extinction will often also cause nonhuman extinction. but you're right that for risks of exclusively human extinction we must also consider the impact of human extinction on other animals, and that impact, whatever its valence, may also outweigh the impact of the event on human well-being.
I'm surprised by your last point, since the article says:
This seems a far cry from the impression you seem to have gotten from the article. In fact your quote of 'highly effective' is only used once, in the introduction, as a hypothetical motivation for crunching the numbers. (Since, a priori, it could have turned out the cost-effectiveness was 100 times higher, which would have been very cost-effective.)
On your first two points, my (admittedly not very justified) impression is that the 'default' opinions people typically have are that almost all human lives are positive, and that animal lives are extremely unimportant compared to humans. Whilst one can question the truth of these claims, writing an article aimed at the majority seems reasonable.
It might be that within EA the average opinion is actually closer to yours, and in any case I agree the assumptions should have been clearly stated somewhere, along with the fact that he is taking the symmetric as opposed to the asymmetric view, etc.
Thanks for this. Other previous work includes my extension of the Oxford Prioritization Project AI model, which actually showed better cost-effectiveness than AMF (near the bottom). And to Halstead's point, global catastrophic risk reduction generally has even higher cost-effectiveness from the perspective of the present generation than existential risk reduction, such as this.
I believe there's a minor error in your Guesstimate model. The 'proportional x-risk reduction' should be 0.01, not 0.011, to correspond to the 1% proportional reduction mentioned in the post:
Edit: Actually, never mind. I see this is an artifact of inputting the reduction as an uncertain value from 0.005 to 0.02.
I think this is largely compensated by a rise in average life-expectancy.
I'd also like to note Bostrom's point in Astronomical Waste that extinction could prevent current people from living billions of years, and that this gives enough reason for person-affecting utilitarians to prioritize x-risk reduction.
From Bostrom (2003):
Perhaps interesting in this context: my current population ethical view of variable critical level utilitarianism: https://stijnbruers.wordpress.com/2018/02/24/variable-critical-level-utilitarianism-as-the-solution-to-population-ethics/
Edit: My comment is wrong; I had misread the price as £1 billion as a one-off, but it is £1 billion per year.
I'm not quite able to follow what role annualising the risk plays in your model, since as far as I can tell you seem to calculate your final cost-effectiveness purely in terms of the risk reduction in 1 year. This seems like it should undercount the impact 100-fold.
e.g. if I skip annualising entirely, and just work in century blocks I get:
still 247 billion life-years at stake
1% chance of x-risk, reduced to 0.99% by £1 billion project X.
This gives an expected £ per year of life of 10^9 / (0.01% * 247 * 10^9) ≈ £40, which is about 1/100 of your answer.
I might well have misunderstood some important part of your model, or be making some probability-related mistake.
The mistake might be on my part, but I think where this may be going wrong is that I assume the cost needs to be repeated each year (i.e. you spend $1B to reduce risk by 1% in 2018, then have to spend another $1B to reduce risk by 1% in 2019). So if you assume a single $1B pulse reduces x-risk across the century by 1%, then you do get 100-fold better results.
I mainly chose the device of some costly 'project X' as it is hard to get a handle on (e.g.) whether a 10^-10 reduction in x-risk/$ is a plausible figure or not. Given this, I might see if I can tweak the wording to make it clearer, or at least make any mistake I am making easier to diagnose.
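To make the two readings concrete, here is a rough point-estimate comparison using the figures from the post (a sketch only; it ignores the distributional choices in the Guesstimate, which push its mean cost per life-year higher than the naive point estimate below):

```python
# Point-estimate comparison of the two readings of 'project X' (a sketch).
life_years_at_stake = 7.6e9 * (70.5 - 38)   # ~247 billion life-years
cost = 1e9                                  # $1B

# Reading in the model: $1B spent this year buys a 1% proportional cut
# to *this year's* extinction risk of 0.0001.
per_year_reading = cost / (1e-4 * 0.01 * life_years_at_stake)

# Alternative reading: a one-off $1B cuts the whole *century's* risk of 1% by 1%.
one_off_reading = cost / (0.01 * 0.01 * life_years_at_stake)

print(round(per_year_reading), round(one_off_reading))  # ~4,000 vs ~40: a 100-fold gap
```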
Ah sorry, yes you are right: I had misread the cost as £1 billion total, not £1 billion per year!
Thanks for doing this. I definitely worry about the cause-selection fallacy where we go 'X is the top cause if you believe theory T; I don't believe T, therefore X can't be my top cause'.
A couple of points.
As you've noted in the comments, you model this as $1bn total, rather than $1bn a year. Ignoring the fact that the person-affecting advocate (PAA) only cares about present people (at the time of the initial decision to spend), if the cost-effectiveness is even 10x lower then it probably no longer counts as a good buy.
This is true, although whatever money you put towards the extinction project is likely to change all the identities, thus necessary people are effectively the same as present people. Even telling people 'hey, we're working on this x-risk project' is enough to change all future identities.
If you wanted to pump up the numbers, you could claim that advances in aging will mean present people will live a lot longer (200 years rather than 70). This strikes me as reasonable, at least when presented as an alternative, more optimistic calculation.
You're implicitly using the life-comparative account of the badness of death: the badness of your death is equal to the amount of happiness you would have had if you'd lived. On this view, it's much more valuable to save the lives of very young people, i.e. whenever they count as a person, say 6 months after conception, or something. However, most PAAs, as far as I can tell, take the Time-Relative Interest Account (TRIA) of the badness of death, which holds it's better to save a 20-year-old than a 2-year-old because the 2-year-old doesn't yet have interests in continuing to live. On TRIA, abortion isn't a problem, whereas it's a big loss on the life-comparative account (assuming the foetus is terminated after personhood). This interests stuff is usually cashed out, at least by Jeff McMahan, in terms of Parfitian ideas about personal identity (apologies to those who aren't familiar with this shorthand). On TRIA, the value of saving a life is the happiness it would have had times the psychological continuity with one's future self. Very young people, e.g. babies, have basically no psychological continuity, so saving their lives isn't important. But people keep changing over time: the 20-year-old is quite psychologically distinct from the 80-year-old. On TRIA, we need to factor that in too. This fact seems to be overlooked in the literature, but on TRIA you apply a discount to the future based on this change in psychological continuity. To push the point, suppose we say that everyone's psychology totally changes over the course of 10 years. Then TRIA advocates won't care what happens in 10 years' time. Hence PAAs who like TRIA, which, as I say, seems to be most of them, will discount the value of the future much more steeply than PAAs who endorse the life-comparative account. Upshot: if someone takes TRIA seriously (which no one should, btw) and knows what it implies, you'll really struggle to convince them x-risk is important on your estimate.
Finally, anyone who endorses the procreative asymmetry (creating happy people is neutral, creating unhappy people is bad) will want to try to increase x-risk and blow up the world. Why? Well, the future can only be bad: the happy lives don't count as good, and the unhappy lives will count as bad. Halstead discusses this here, if I recall correctly. It's true, on the asymmetry, avoiding x-risk would be good regarding current people, but increasing x-risk will be good regarding future people, as it will stop there being any of them. And as x-risk (reduction) enthusiasts are keen to point out, there is potentially a lot of future still to come.
No, in my comments I note precisely the opposite. The model assumes $1B per year. If the cost is $1B total to reduce risk for the subsequent century, the numbers get more optimistic (100x more optimistic if you buy counterpart-y views, but still somewhat better if you discount the benefit in future years by how many from the initial cohort remain alive).
Further, the model is time-uniform, so it can collapse into 'I can spend $1B in 2018 to reduce x-risk in this year by 1% from a 0.01% baseline', and the same number gets spat out. So if a PAA buys these numbers (as Alex says, I think the figures I offer skew conservative relative to the x-risk consensus if we take them as amortized across-century risk; they might be about right/'optimistic' if they are taken as an estimate for this year alone), this looks like an approximately good buy.
Population ethics generally, and PA views within them, are far from my expertise. I guess I'd be surprised if pricing by TRIA gives a huge discount, as I take it most people consider themselves pretty psychologically continuous from the ages of ~15 onwards. If this isn't true, or the consensus view amongst PAAs is 'TRIA, and we're mistaken about our degree of psychological continuity', then this plausibly shaves off an order of magnitude-ish and plonks it more in the 'probably not a good buy' category.
In which case I'm not understanding your model. The 'Cost per life year' box is $1bn/EV. How is that not a one-off of $1bn? What have I missed?
As noted above, if people only live 70 years, then on PAA there's no point wondering what happens after 70 years.
yeah, I don't think people have looked at this enough to form views on the figure. McMahan does want to discount future wellbeing for people by some amount, but is reluctant to be pushed into giving a number. I'd guess it's something like 2% a year. The effect is something like assuming a 2% pure time discount.
The EV in question is the reduction in x-risk for a single year, not across the century. I'll change the wording to make this clearer.
Ah. So the EV is for a single year. But I still only see $1bn. So your number is 'this is the cost per life-year saved if we spend the money this year and it causes an instantaneous reduction in x-risk for this year'?
So your figure is the cost-effectiveness of reducing instantaneous x-risk at Tn, where Tn is now, whenever now is. But it's not the cost-effectiveness of that reduction at Tf, where Tf is some year in the future, because the further in the future this occurs, the less the EV is on PAA. If I'm wondering, from the perspective of T0, what the cost-effectiveness would be of spending $1bn in 10 years and causing a reduction at T10, on your model I increase the mean age by 10 years to 48, and the average cost per life-year becomes $12k. From the perspective of T10, reducing x-risk in the way you say at T10 is, again, $9k.
By contrast, for totalists the calculations would be the same (excepting inflation, etc.).
Also, not sure why my comment was downvoted. I wasn't being rude (or, I think, stupid) and I think it's unhelpful to downvote without explanation, as it just looks petty and feels unfriendly.
I didn't downvote, but:
The last two sentences of this come across as pretty curt to me. I think there is a wide range in how people interpret things like these, so it is probably just a bit of a communication-style mismatch. (I think I have noticed myself having a similar reaction to a few of your comments before where I don't think you meant any rudeness.)
I agree with this on some level, but I'm not sure I want there to be uneven costs to upvoting/downvoting content. I think there is also an unfriendliness vs. enforcing-standards tradeoff where the marginal decisions will typically look petty.
Yeah, on re-reading, the 'How is that not a one-off of $1bn?' does seem snippy. Okay. Fair cop.
I didn't see it as all that snipey. I think downvotes should be reserved for more severe tonal misdemeanours than this.
There's a bit of a difficult balance between necessary policing of tone and engagement with substantive arguments. I think as a rule people tend to talk about tone too much in arguments, to the detriment of talking about the substance.
It would also have the same (or worse) effect on other things that save lives (e.g. AMF), so it is not totally clear how much worse x-risk would look compared to everything else. (Although perhaps, e.g., deworming would come out very well, if it just reduces suffering on a short-ish timescale. The fact that it mostly affects children might sway things the other way, though!)
I agree. As I said here, TRIA implies you should care much less about saving young lives. The upshot of TRIA vs. PAA combined with the life-comparative account is that you should focus more on improving lives than saving lives if you like TRIA.
Just on this note, GiveWell claim only 2% of the value of deworming comes from short-term health benefits and 98% from economic gains (see their latest cost-effectiveness spreadsheet), so they don't think the value is on the suffering-reducing end.
I have heard surprisingly many non-philosophers argue for the Epicurean view: that death is not bad for the individual because there's no one for it to be bad for. They would argue that death is only bad because others will have grief and other negative consequences. However, in a painless extinction event this would not be bad at all.
This is all to say that one's conception of the badness of death indeed matters a lot for the negative value of extinction.
Ah, good point! Yes, I didn't mention this for some reason, although I should have. Indeed, if (like me) you're sympathetic to person-affecting views in population ethics and Epicureanism about the badness of death, then the only reason to reduce x-risk would be to reduce the suffering of currently living people during their lifetimes. In short, x-risk would not be much of a priority on this combination, but that's basically pretty obvious if you hold this combination of views.