Thanks for doing this. I definitely worry about the cause-selection fallacy where we go “X is the top cause if you believe theory T; I don’t believe T, therefore X can’t be my top cause”.
A couple of points.
As you’ve noted in the comments, you model this as $1bn total, rather than $1bn a year. Ignoring the fact that the person-affecting advocate (PAA) only cares about present people (at the time of the initial decision to spend), if the cost-effectiveness is even 10 times lower then it probably no longer counts as a good buy.
Other person-affecting views consider people who will necessarily exist (however that’s cashed out) rather than whether they happen to exist now (planting a bomb with a 1,000-year timer still accrues person-affecting harm). In an ‘extinction in 100 years’ scenario, this view would still count the harm to everyone alive then who dies, although it would still discount, in the moral calculus, the foregone benefit of the people who ‘could have been’ subsequently.
This is true, although whatever money you put towards the extinction project is likely to change all the identities, so the class of necessary people is effectively the same as that of present people. Even telling people “hey, we’re working on this X-risk project” is enough to change all future identities.
If you wanted to pump up the numbers, you could claim that advances in ageing research will mean present people live a lot longer: 200 years rather than 70. This strikes me as reasonable, at least when presented as an alternative, more optimistic calculation.
You’re implicitly using the life-comparative account of the badness of death: the badness of your death is equal to the amount of happiness you would have had if you’d lived. On this view, it’s much more valuable to save the lives of very young people, i.e. from whenever they first count as a person, say 6 months after conception, or something. However, most PAAs, as far as I can tell, take the Time-Relative Interest Account (TRIA) of the badness of death, which holds it’s better to save a 20-year-old than a 2-year-old because the 2-year-old doesn’t yet have interests in continuing to live. On TRIA, abortion isn’t a problem, whereas it’s a big loss on the life-comparative account (assuming the foetus is terminated after personhood). This interests stuff is usually cashed out, at least by Jeff McMahan, in terms of Parfitian ideas about personal identity (apologies to those who aren’t familiar with this shorthand).

On TRIA, the value of saving a life is the happiness it would have had, multiplied by the psychological continuity between the person now and their future self. Very young people, e.g. babies, have basically no psychological continuity, so saving their lives isn’t important. But people keep changing over time: a 20-year-old is quite psychologically distinct from the 80-year-old they will become, and on TRIA we need to factor that in too. This seems to be overlooked in the literature, but on TRIA you apply a discount to the future based on this loss of psychological continuity. To push the point, suppose everyone’s psychology totally changes over the course of 10 years. Then TRIA advocates won’t care what happens in 10 years’ time. Hence PAAs who like TRIA (which, as I say, seems to be most of them) will discount the value of the future much more steeply than PAAs who endorse the life-comparative account. Upshot: if someone takes TRIA seriously (which no one should, by the way) and knows what it implies, you’ll really struggle to convince them X-risk is important on your estimate.
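To make the shape of that discount concrete, here is a minimal sketch in Python. It assumes, purely for illustration, a constant annual decay in psychological continuity (the 2%-a-year figure matches the guess I give further down the thread); none of these numbers come from McMahan or the literature:

```python
# A minimal sketch of a TRIA-style discount, assuming a constant annual
# decay in psychological continuity. All numbers are illustrative
# assumptions, not figures from the literature.

def tria_value_of_saving(age, life_expectancy=70, annual_wellbeing=1.0,
                         continuity_decay=0.02):
    """Sum future wellbeing, weighted by the (decaying) psychological
    continuity between the person now and each later life-stage."""
    years_remaining = max(0, life_expectancy - age)
    return sum(annual_wellbeing * (1 - continuity_decay) ** t
               for t in range(1, years_remaining + 1))

print(tria_value_of_saving(20))  # ~31 weighted life-years (vs 50 undiscounted)
print(tria_value_of_saving(60))  # ~9 weighted life-years (vs 10 undiscounted)
```

Even this modest decay rate compresses the value of the far future relative to the life-comparative account, which is the point at issue.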
Finally, anyone who endorses the procreative asymmetry (creating happy people is neutral, creating unhappy people is bad) will want to try to increase x-risk and blow up the world. Why? Well, the future can only be bad: the happy lives don’t count as good, and the unhappy lives will count as bad. Halstead discusses this here, if I recall correctly. It’s true that, on the asymmetry, avoiding x-risk would be good as regards current people, but increasing x-risk will be good as regards future people, as it will stop there being any of them. And as X-risk (reduction) enthusiasts are keen to point out, there is potentially a lot of future still to come.
As you’ve noted in the comments, you model this as $1bn total, rather than $1bn a year. Ignoring the fact that the person-affecting advocate (PAA) only cares about present people (at the time of the initial decision to spend), if the cost-effectiveness is even 10 times lower then it probably no longer counts as a good buy.
No, in my comments I note precisely the opposite. The model assumes 1B per year. If the cost is 1B total to reduce risk for the subsequent century, the numbers get more optimistic (100x more optimistic if you buy counterpart-y views, but still somewhat better if you discount the benefit in future years by how many of the initial cohort remain alive).
Further, the model is time-uniform, so it can collapse into ‘I can spend 1B in 2018 to reduce xrisk this year by 1% from a 0.01% baseline’, and the same number gets spit out. So if a PAA buys these numbers (as Alex says, I think the numbers I offer skew conservative relative to the xrisk consensus if we take them as amortized across-century risk; they might be about right/‘optimistic’ if taken as an estimate for this year alone), this looks like an approximately good buy.
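For concreteness, here is a back-of-envelope version of that collapsed single-year claim. The risk figures are the ones just quoted; the population, mean-age, and life-expectancy inputs are placeholder assumptions of mine rather than the model’s actual inputs, so the output won’t exactly reproduce the headline figure:

```python
# Single-year version of the model: $1bn spent in 2018 shaves 1% off a
# 0.01% baseline extinction risk for that year. Only the risk figures
# come from the thread; the rest are placeholder assumptions.

population = 7.6e9            # assumed number of present people
mean_age = 38                 # assumed mean age of those alive
life_expectancy = 70          # assumed average lifespan
spend = 1e9                   # $1bn, spent this year

baseline_risk = 1e-4                 # 0.01% chance of extinction this year
delta_risk = baseline_risk * 0.01    # a 1% relative reduction

life_years_saved = population * (life_expectancy - mean_age) * delta_risk
print(spend / life_years_saved)      # ≈ $4k per expected life-year on these inputs
```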
Population ethics generally, and PA views within them, are far from my expertise. I guess I’d be surprised if pricing by TRIA gives a huge discount, as I take it most people consider themselves pretty psychologically continuous from the age of ~15 onwards. If this isn’t true, or the consensus view amongst PAAs is “TRIA, and we’re mistaken about our degree of psychological continuity”, then this plausibly shaves off an order of magnitude-ish and plonks it more in the ‘probably not a good buy’ category.
In which case I’m not understanding your model. The ‘Cost per life year’ box is $1bn/EV. How is that not a one-off of $1bn? What have I missed?
If the cost is 1B total to reduce risk for the subsequent century
As noted above, if people only live 70 years, then on PAA there’s no point wondering what happens after 70 years.
I guess I’d be surprised if pricing by TRIA gives a huge discount
Yeah, I don’t think people have looked at this enough to form views on the figure. McMahan does want to discount future wellbeing for people by some amount, but is reluctant to be pushed into giving a number. I’d guess it’s something like 2% a year. The effect is something like assuming a 2% pure time discount.
The EV in question is the reduction in x-risk for a single year, not across the century. I’ll change the wording to make this clearer.

Ah. So the EV is for a single year. But I still only see $1bn. So your number is “this is the cost per life year saved if we spend the money this year and it causes an instantaneous reduction in X-risk for this year”?
So your figure is the cost-effectiveness of reducing instantaneous X-risk at Tn, where Tn is now, whenever now is. But it’s not the cost-effectiveness of that reduction at Tf, where Tf is some year in the future, because the further in the future this occurs, the less the EV is on PAA. If I’m wondering, from the perspective of T0, what the cost-effectiveness would be of spending $1bn in 10 years’ time to cause a reduction at T10, then on your model I increase the mean age by 10 years to 48, and the average cost per life-year becomes $12k. From the perspective of T10, reducing X-risk in the way you say at T10 is, again, $9k.
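Spelling that out with the same placeholder inputs as the earlier sketch (so the absolute numbers differ from the $9k/$12k above, but the direction of the effect is what matters):

```python
# On PAA, a reduction at T10 judged from T0 only credits the residual
# lifespans of people already alive at T0, which is equivalent to
# raising the mean age by 10 years. Judged from T10, the cohort resets.
# Inputs are the same placeholder assumptions as in the sketch above.

def cost_per_life_year(mean_age, population=7.6e9, life_expectancy=70,
                       spend=1e9, delta_risk=1e-6):
    return spend / (population * (life_expectancy - mean_age) * delta_risk)

print(cost_per_life_year(38))  # reduction now, judged now
print(cost_per_life_year(48))  # reduction at T10, judged from T0 (costlier)
print(cost_per_life_year(38))  # reduction at T10, judged from T10 (same as now)
```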
By contrast, for totalists the calculations would be the same (excepting inflation, etc.).
Also, not sure why my comment was downvoted. I wasn’t being rude (or, I think, stupid) and I think it’s unhelpful to downvote without explanation as it just looks petty and feels unfriendly.
Also, not sure why my comment was downvoted. I wasn’t being rude (or, I think, stupid) and I think it’s unhelpful to downvote without explanation as it just looks petty and feels unfriendly.
I didn’t downvote, but:
In which case I’m not understanding your model. The ‘Cost per life year’ box is $1bn/EV. How is that not a one-off of $1bn? What have I missed?
The last two sentences of this come across as pretty curt to me. I think there is a wide range in how people interpret things like these, so it is probably just a bit of a communication-style mismatch. (I think I have noticed myself having a similar reaction to a few of your comments before, where I don’t think you meant any rudeness.)
I think it’s unhelpful to downvote without explanation as it just looks petty and feels unfriendly.
I agree with this on some level, but I’m not sure I want there to be uneven costs to upvoting/downvoting content. I think there is also an unfriendliness vs. enforcing standards tradeoff where the marginal decisions will typically look petty.
Yeah, on re-reading, the “How is that not a one-off of $1bn?” does seem snippy. Okay. Fair cop.

I didn’t see it as all that snipey. I think downvotes should be reserved for more severe tonal misdemeanours than this.
There’s a bit of a difficult balance between necessary policing of tone and engagement with substantive arguments. I think, as a rule, people tend to talk about tone too much in arguments, to the detriment of talking about the substance.
If this isn’t true, or consensus view amongst PAAs is “TRIA, and we’re mistaken to our degree of psychological continuity”, then this plausibly shaves off an order of magnitude-ish and plonks it more in the ‘probably not a good buy’ category.
It would also have the same (or worse) effect on other things that save lives (e.g. AMF), so it is not totally clear how much worse x-risk would look compared to everything else. (Although perhaps e.g. deworming would come out very well, if it just reduces suffering for a short-ish timescale. The fact that it mostly affects children might sway things the other way, though!)
It would also have the same (or worse) effect on other things that save lives (e.g. AMF)
I agree. As I said here, TRIA implies you should care much less about saving young lives. The upshot is that, if you like TRIA rather than PAA combined with the life-comparative account, you should focus more on improving lives than on saving lives.
Although perhaps e.g. deworming would come out very well, if it just reduces suffering for a short-ish timescale
Just on this note, GiveWell claim only 2% of the value of deworming comes from short-term health benefits and 98% from economic gains (see their latest cost-effectiveness spreadsheet), so they don’t think the value is on the suffering-reducing end.
You’re implicitly using the life-comparative account of the badness of death: the badness of your death is equal to the amount of happiness you would have had if you’d lived.
I have heard surprisingly many non-philosophers argue for the Epicurean view: that death is not bad for the individual, because there’s no one for it to be bad for. They would argue that death is only bad because others will grieve and suffer other negative consequences. However, on this view a painless extinction event would not be bad at all.
This is all to say that one’s conception of the badness of death indeed matters a lot for the negative value of extinction.
Ah, good point! Yes, I didn’t mention this for some reason, although I should have. Indeed, if (like me) you’re sympathetic to person-affecting views in population ethics and to Epicureanism about the badness of death, then the only reason to reduce X-risk would be to reduce the suffering of currently living people during their lifetimes. In short, X-risk would not be much of a priority on this combination of views, but that’s basically pretty obvious if you hold it.