To some, it’s just obvious that future lives have value and the highest priority is fighting existential threats to humanity (‘X-risks’).
I realize this is just an example, but I want to mention as a side-note that I find it odd how common this framing is. AFAIK almost everyone working on existential risk thinks it’s a serious concern in our lifetimes, not specifically a “far future” issue or one that turns on whether it’s good to create new people.
As an example of what I have in mind, I don’t understand why the GCR-focused EA Fund is framed as a “long-term future” fund (unless I’m misunderstanding the kinds of GCR interventions it’s planning to focus on), or why philosophical stances like the person-affecting view and presentism are foregrounded. The natural things I’d expect to be foregrounded are factual questions about the probability and magnitude over the coming decades of the specific GCRs EAs are most worried about.
Agree that GCRs are a within-our-lifetime problem. But in my view mitigating GCRs is unlikely to be the optimal donation target if you are only considering the impact on beings alive today. Do you know of any sources that make the opposite case?
And it’s framed as long-run future because we think that there are potentially lots of things that could have a huge positive impact on the value of the long-run future which aren’t GCRs—like humanity having the right values, for example.
Someone taking a hard ‘inside view’ about AI risk could reasonably view it as better than AMF for people alive now, or during the rest of their lives. I’m thinking something like:
1 in 10 risk of AI killing everyone within the next 50 years. Spending an extra $1 billion on safety research could reduce the size of this risk by 1%.
$1 billion / (0.1 risk × 1% reduction × 8 billion lives) = $125 per life saved. Compares with $3,000-7,000+ for AMF.
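For anyone who wants to vary the assumptions, here’s a minimal sketch of that back-of-the-envelope calculation (the inputs are just the illustrative numbers above, not estimates of mine):

```python
# Hedged sketch of the cost-per-life arithmetic above; every input is an
# illustrative assumption from the comment, not a real estimate.
p_ai_catastrophe = 0.10         # assumed 1-in-10 chance AI kills everyone within 50 years
relative_risk_reduction = 0.01  # assumed effect of an extra $1 billion of safety research
population = 8e9                # roughly the number of people alive today
extra_spending = 1e9            # the extra $1 billion

expected_lives_saved = p_ai_catastrophe * relative_risk_reduction * population  # 8 million
cost_per_life = extra_spending / expected_lives_saved
print(f"${cost_per_life:,.0f} per expected life saved")  # $125, vs $3,000-7,000+ for AMF
```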
This is before considering any upside from improved length or quality of life for the present generation as a result of a value-aligned AI.
I’m probably not quite as optimistic as this, but I still prefer AI as a cause over poverty reduction, for the purposes of helping the present generation (and those remaining to be born during my lifetime).
That’s reasonable, though if the aim is just “benefits over the next 50 years” I think that campaigns against factory farming seem like the stronger comparison:
“We’ve estimated that corporate campaigns can spare over 200 hens from cage confinement for each dollar spent. If we roughly imagine that each hen gains two years of 25%-improved life, this is equivalent to one hen-life-year for every $0.01 spent.”
“One could, of course, value chickens while valuing humans more. If one values humans 10-100x as much, this still implies that corporate campaigns are a far better use of funds (100-1,000x) [So $30-ish per equivalent life saved].”
http://www.openphilanthropy.org/blog/worldview-diversification
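To make the quoted figures and the bracketed “$30-ish” gloss easier to check, here’s a rough reconstruction under the same assumptions (the 10-100x weighting and AMF’s $3,000-7,000 per life come from the quote and the earlier comment; nothing here is a new estimate):

```python
# Rough reconstruction of the quoted Open Phil figures; all inputs are the
# quoted rough assumptions, and the output is only an order-of-magnitude check.
hens_spared_per_dollar = 200
years_affected_per_hen = 2
welfare_improvement = 0.25  # each affected year counted as a quarter of a good hen-year

equivalent_hen_years_per_dollar = (
    hens_spared_per_dollar * years_affected_per_hen * welfare_improvement
)  # = 100, i.e. one equivalent hen-life-year per $0.01

# If campaigns are 100-1,000x better per dollar than AMF-style interventions,
# dividing AMF's $3,000-7,000 per life by that multiplier gives roughly $3-70
# per "equivalent life saved", which is where the "$30-ish" gloss sits.
amf_cost_low, amf_cost_high = 3_000, 7_000
multiplier_low, multiplier_high = 100, 1_000
print(amf_cost_low / multiplier_high, amf_cost_high / multiplier_low)  # 3.0 70.0
```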
And to clarify my first comment, “unlikely to be optimal” = I think it’s a contender, but the base rate for “X is an optimal intervention” is really low.
“if you are only considering the impact on beings alive today...factory farming”
The interventions you are discussing don’t help any beings alive at the time, but only affect the conditions (or existence) of future ones. In particular, cage-free campaigns and campaigns for slower-growth genetics and lower crowding among chickens raised for meat are all about changing the conditions into which future chickens will be born; they don’t involve moving any particular chickens from the old systems to the new ones.
I.e. the case for those interventions already involves rejecting a strong presentist view.
“That’s reasonable, though if the aim is just “benefits over the next 50 years” I think that campaigns against factory farming seem like the stronger comparison:”
Suppose there’s an intelligence explosion in 30 years (not wildly unlikely in expert surveys), followed by an expansion of the population by 3-12 orders of magnitude over the following 10 years (with AI life of various kinds outnumbering both the humans and the non-human animals alive today, and with vastly more total computation). Then almost all the well-being of the next 50 years lies in that period.
Also in that scenario existing beings could enjoy accelerated subjective speed of thought and greatly enhanced well-being, so most of the QALY-equivalents for long-lived existing beings could lie there.
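As a rough sanity check on “almost all the well-being of the next 50 years lies in that period”, here is the arithmetic at the low end of that scenario, with my own illustrative assumptions about timing (explosion at year 30, the expansion finished by year 40, population then held constant):

```python
# Sanity check using the scenario's low end; the timing assumptions are mine,
# chosen conservatively, and nothing here is a forecast.
baseline_pop = 8e9
person_years_no_explosion = baseline_pop * 50      # ~4e11 over the whole 50 years

expanded_pop = baseline_pop * 1_000                # +3 orders of magnitude (the low end)
person_years_after_expansion = expanded_pop * 10   # years 40-50 alone, ~8e13

print(person_years_after_expansion / person_years_no_explosion)  # ~200x
```

Even ignoring the expansion decade itself and any change in per-capita well-being, the final decade alone carries a couple of hundred times more person-years than 50 years at today’s population, so the conclusion doesn’t depend on the 12-orders-of-magnitude end of the range.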
Mea culpa that I switched from “impact on beings alive today” to “benefits over the next 50 years” without noticing.
Agree with the above, but wanted to ask: what do you mean by a ‘strong presentist’ view? I’ve not heard/seen the term and am unsure what it is contrasted with.
Is ‘weak presentism’ that you give some weight to non-presently existing people, ‘strong presentism’ that you give none?
“Is ‘weak presentism’ that you give some weight to non-presently existing people, ‘strong presentism’ that you give none?”
In my comment, yes.
Why does this confusion persist among long-time EA thought leaders after many years of hashing out the relevant very simple principles? “Beings currently alive” is a judgment about which changes are good in principle, while “benefits over the next 50 years” is an entirely different, pragmatic scope limitation, and people keep bringing up the first in defense of things that can only really be justified by the second.
I understand how someone could be initially confused about this—I was too. But it seems like the right thing to do once corrected is to actually update your model of the world so you don’t generate the error again. Presentism without negative utilitarianism suggests that we should focus on some combination of curing aging, real wealth creation sufficient to extend this benefit to as many currently alive people as we can, and preventing deaths before we manage to extend this benefit, including deaths from GCRs likely to happen during the lives of currently living beings.
As it is, we’re not making intellectual progress, since the same errors keep popping up, and we’re not generating actions based on the principles we’re talking about, since people keep bringing up principles that don’t actually recommend the relevant actions. What are we doing, then, when we talk about moral principles?
To add on to this, I think the view you’re referring to is presentism combined with deprivationism about death: presentism = only presently alive people matter; deprivationism = the badness of death is the amount of happiness the person would otherwise have had.
You could instead combine presentism (or another person-affecting view) with, say, Epicureanism about death. That combination would hold that only presently alive people matter and that there’s no badness in death, and hence no value in extending lives.
If that were your view, you’d focus on the suffering of presently existing humans instead. Probably mental illness or chronic pain. Maybe social isolation if you had a really neat intervention.
But yeah, you’re right that person-affecting views don’t capture the intuitive badness of animal suffering. You could still be a presentist and vegan on environmental grounds.
And I agree that presentism + deprivationism suggests trying to cure aging is very important and, depending on details, could have higher EV than suffering relief. I’m less convinced that real wealth creation would do very much, due to hedonic adaptation and social comparison effects.
I don’t have much to add to what Rob W and Carl said, but I’ll note that Bostrom defined “existential risk” like this back in 2008:
A subset of global catastrophic risks is existential risks. An existential risk is one that threatens to cause the extinction of Earth-originating intelligent life or to reduce its quality of life (compared to what would otherwise have been possible) permanently and drastically.
Presumably we should replace “intelligent” here with “sentient” or similar. The reason I’m quoting this is that on the above definition, it sounds like any potential future event or process that would cost us a large portion of the future’s value counts as an xrisk (and therefore as a GCR). ‘Humanity’s moral progress stagnates or we otherwise end up with the wrong values’ sounds like a global catastrophic risk to me, on that definition. (From a perspective that does care about long-term issues, at least.)
I’ll note that I think there’s at least some disagreement at FHI / Open Phil / etc. about how best to define terms like “GCR”, and I don’t know if there’s currently a consensus or what that consensus is. Also worth noting that the “risk” part is more clearly relevant than the “global catastrophe” part—malaria and factory farming are arguably global catastrophes in Bostrom’s sense, but they aren’t “risks” in the relevant sense, because they’re already occurring.
“counts as an xrisk (and therefore as a GCR)”
My understanding: GCR = (something like) risk of major catastrophe that kills 100mn+ people
(I think the GCR book defines it as risk of 10mn+ deaths, but that seemed too low to me).
So, as I was using the term, something being an x-risk does not entail it being a GCR. I’d count ‘Humanity’s moral progress stagnates or we otherwise end up with the wrong values’ as an x-risk but not a GCR.
Interesting (/worrying!) how we’re understanding widely-used terms so differently.
Agree that that’s the most common operationalization of a GCR. It’s a bit inelegant for “GCR” not to include all x-risks, though, especially given that the two terms are often used interchangeably within EA.
It would be odd if the onset of a permanently miserable dictatorship didn’t count as a global catastrophe because no lives were lost.
Could you or Will provide an example of a source that explicitly uses “GCR” and “xrisk” in such a way that there are non-GCR xrisks? You say this is the most common operationalization, but I’m only finding examples that treat xrisk as a subset of GCR, as the Bostrom quote above does.
You’re right: it looks like most written texts, especially more formal ones, give definitions on which x-risks are a subset of (or coextensive with) GCRs. We should probably try to roll that out to informal discussions and operationalisations too.
“Definition: Global Catastrophic Risk – risk of events or processes that would lead to the deaths of approximately a tenth of the world’s population, or have a comparable impact.” GCR Report
“A global catastrophic risk is a hypothetical future event that has the potential to damage human well-being on a global scale.” - Wiki
“Global catastrophic risk (GCR) is the risk of events large enough to significantly harm or even destroy human civilization at the global scale.” GCRI
“These represent global catastrophic risks—events that might kill a tenth of the world’s population.”—HuffPo